Tagged in: data protection



Now In Rubrik: Enhanced Performance, Flexibility, and Compliance for Enterprise Environments

As enterprises continue to search for ways to unlock new value from their data, IT teams face a new set of challenges. A surge in data silos, data growth, and multiple protection tools has increased the complexity and risk of data protection. With time-consuming management and costly maintenance, legacy backup and recovery solutions can be roadblocks for IT teams trying to ensure agility and business continuity. That’s why we’re thrilled to announce Rubrik Andes 5.2, which helps enterprises modernize data protection and management across on-prem, edge, and cloud environments. Our newest enhancements and features help customers improve performance and facilitate security and compliance while minimizing complexity at enterprise scale. Here’s a rundown of what’s new in Rubrik Andes 5.2:

Blazingly Fast VMware Performance

Rubrik Andes 5.2 introduces VMware multi-node data streaming, improving backup speed by up to 5x for large, multi-terabyte VMs. For VMware vSphere, our latest enhancements improve backup and restore speed by up to 3x for VMs of any size. Now resilient to network glitches, Rubrik helps ensure your VMware backup or recovery completes faster than ever with embedded resiliency. The release also introduces the ability to restore from an archived VMware VM to any VMware VM.

Flexible…

General Tech

Exploring Passive Survivability: Bracing for a Cyber Attack

Security attacks continue to rise as threats like ransomware grow more mature. Many enterprises find themselves unprepared for an attack, and more organizations are opting to pay ransom than ever before. This is because recovering from an attack is often time-consuming and complex, and in many cases the backups themselves are compromised. Although preventing ransomware attacks may seem nearly impossible, there are tools and infrastructure best practices that can help you build an effective ransomware remediation plan and ensure cyber resiliency.

In an article with Infosecurity Magazine, Robert Rhame, Director of Market Intelligence at Rubrik, explores the passive survivability model and how this framework can enable your team to bounce back from a successful attack. Let’s take a quick look at this model and how, according to Rhame, it can prepare your team. A version of the excerpt below originally appeared in Infosecurity Magazine.

Design Your Infrastructure for Ransomware Resiliency

When it comes to preparing for a threat that you can’t stop, your infrastructure must be designed so that an attack, although damaging to your business, does not cause all of your operations to sink. Like a modern battleship, your infrastructure should be created…


Better NAS Backup RPO with Changelist Snapshots

Although NDMP has long been the de facto approach for protecting NAS data, its limitations led Rubrik to choose a different approach from day one, leveraging NFS and SMB instead. To ensure that customers can reliably protect their data and recover quickly from data corruption and data loss, Rubrik takes an approach that obviates the need for complex NDMP implementations.

Three-Phase Approach

NAS protection with Rubrik uses a three-phase approach. Each phase is optimized using modern techniques such as snapshot API integration, data partitioning, and parallel file streaming. The three phases are:

- Scan: Rubrik identifies which files need to be protected for full or incremental backups.
- Fetch: Rubrik takes the list of files from the Scan phase and reads them over the NAS protocol.
- Copy: Rubrik compresses, encrypts, and writes the data to either the Rubrik cluster or, using NAS Direct Archive, to an archive location on-premises or in the cloud.

NAS Data Scan

File scanning has historically been the biggest bottleneck for NAS backup performance, a challenge that has only grown with data volumes. Previous approaches, such as image-level backups, have increased backup performance by avoiding file scanning altogether.…
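The Scan/Fetch/Copy pipeline above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Rubrik's implementation: the change detection uses modification times, the "NAS protocol" read is a plain callback, and the destination is an in-memory dict, all simplifying assumptions.

```python
import zlib

def scan(previous_mtimes, current_files):
    """Scan: pick files that are new or changed since the last backup (incremental)."""
    return [path for path, mtime in current_files.items()
            if previous_mtimes.get(path) != mtime]

def fetch(paths, read_file):
    """Fetch: read each selected file over the NAS protocol (here, a callback)."""
    return {path: read_file(path) for path in paths}

def copy(file_data, destination):
    """Copy: compress each file's bytes and write them to the backup destination."""
    for path, data in file_data.items():
        destination[path] = zlib.compress(data)

# Toy example: only one file changed since the previous snapshot,
# so the Scan phase keeps the incremental backup small.
current = {"/share/a.txt": 100, "/share/b.txt": 205}   # path -> mtime
previous = {"/share/a.txt": 100, "/share/b.txt": 200}
contents = {"/share/b.txt": b"updated contents"}

changed = scan(previous, current)
fetched = fetch(changed, contents.__getitem__)
backup = {}
copy(fetched, backup)
```

The point of separating the phases is that each can be parallelized independently, which is where techniques like data partitioning and parallel file streaming apply.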

General Tech

How to Create a Successful Data Lake

Data-driven decision making is transforming how businesses and IT operate. As organizations look to access all types of information, they have carved out a need for higher-level infrastructure experts who can help unlock new value from their data. The modern-day DBA has an opportunity to be an in-house expert and a strategic business partner, managing this data and ensuring it is available to those who need it. To do this, in addition to building up their cloud and DevOps skills, many DBAs are turning to data lakes: large repositories into which data, in its raw, natural form, flows from many sources. Users across an organization can then access and analyze the centralized data.

The true power of a data lake shines when you can maximize adoption across your enterprise so that big data informs as many business decisions as possible. To create your own data lake, you’ll need to decide on platforms and data sources, but, most importantly, you’ll need to determine how to present the data lake to stakeholders to increase adoption across the organization.

What is a Data Lake & Do You Need One?

Enterprises in all industries and of all…

General Tech

5 Signs You’ve Outgrown Your Legacy Data Management System

Despite all the talk about “data-driven decisions,” one thing is often neglected: making sure the data is actually ready to “drive.” Too many companies continue to cling to their legacy IT systems, even when those systems no longer fit the business. Outdated data management solutions require IT teams to spend valuable time on maintenance and troubleshooting, and they prevent organizations from getting the most out of their data. On top of this time-consuming management, relying on legacy technology can increase exposure to serious risks, from customer dissatisfaction to security breaches. So, how can you tell that you’ve outgrown your legacy data management system and that it’s time for a change? Here are five strong signals.

Unnecessary Complexity Has You Spinning Your Wheels

Legacy data management often entails a complicated, multi-tiered architecture that results in siloed data and disorganization. As a result, maintenance requires a lot of manual work. These are some of the signs that you’re mired neck-deep in unnecessary complexity:

- Your team spends time devising workarounds for software or designing workflows specifically to accommodate a legacy technology’s capabilities.
- Your team has created a Frankenstein-like combination of supplemental software to make up for the shortcomings of your legacy system.
- Your team…


LKAB Avoids Millions in Downtime & Drives Operational Efficiency with Rubrik

Rubrik is honored to have received the Best Data Security and Data Protection award at VMworld 2019, marking our eighth VMworld award in just four years. This win was in partnership with our customer LKAB, a leading Swedish mining company, and recognizes the automation, increased efficiency, and significant business value they achieved with Rubrik Edge. Winners of the award were selected “based on the business benefits, levels of innovation and best practice their projects demonstrated.”

Enabling our customers to tackle the toughest enterprise data management challenges and unlock new value from their data is what drives us. That’s why, in celebration of our Best of VMworld award, I want to share LKAB’s story and the business benefits they’ve realized by deploying Rubrik.

Modern Data Management for Modern Operations

LKAB is a state-owned mining company that produced 26.9 Mt of iron ore products in 2018, with ambitious plans to increase production by 5% per year through 2021. Due to the modern technology and high cost of running their mines, IT is a crucial component of their operations, supporting business-critical applications as well as their entire global employee base. As Robert Pohjanen, IT Architect at LKAB, explains, “Our small IT department…


Why NDMP Is Not the Answer to Your NAS Protection Challenges

The Network Data Management Protocol (NDMP) is currently the de facto approach for protecting NAS data. However, an increasing number of enterprises are exploring new approaches as their environments grow and the challenges and limitations associated with NDMP come to the surface.

A Short History of NDMP

More than two decades ago, NAS pioneer NetApp and backup vendor Intelliguard collaborated to solve an issue that was increasingly vexing NAS users: the inability to reliably protect their data. Up to that point, NAS platforms such as NetApp Filers were backed up by having backup servers mount NAS shares and then move the backup data to locally attached tape devices or a networked tape library. This solution was fraught with problems, ranging from low performance caused by reading every file over a POSIX interface, to bottlenecks created by sending data over a single mount point, to the overhead of managing multiple devices. NDMP was first proposed by NetApp and Intelliguard in 1995 to address these challenges in protecting NAS platforms.* NDMP is officially defined as an “open standard protocol for network-based backup for network-attached storage.”** The protocol specifies a separation of the control path from the data path. Control…


Rubrik Archive Consolidation for Lower RTOs and Storage Costs

Most IT admins prefer incremental backups to full ones, as they reduce backup windows, storage costs, and unnecessary redundancy. However, incremental backups can be tricky to manage, particularly as admins look to safely expire snapshots past their retention date. Because each incremental snapshot is relative to previous backups, incrementals form “snapshot chains” that grow in length until the next full backup is taken. As incremental snapshot chains grow, they become slower to recover from and costlier to store. With legacy data protection systems, keeping snapshot chains short requires periodic full backups, which are inefficient and costly and can impact SLA compliance.

Challenges of Shortening Snapshot Chains

With the legacy approach, an incremental snapshot can be past its expiration date yet forbidden from expiring if newer incrementals in the chain still require it for recovery. When recovering from an incremental snapshot, the recovery aggregates the data of that incremental with all previous incrementals in the chain, plus the full snapshot that anchors the chain. As a result, no snapshot in the chain can expire until every snapshot in the chain has expired. This forces chains to grow increasingly long, which in turn drives up RTOs and storage costs.

Simplified Archive Management…
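The chain-dependency rule described above can be made concrete with a small model. This is a generic sketch of incremental snapshot chains, assuming dict-of-changes snapshots; names like `recover` and `can_expire` are made up for illustration and do not reflect Rubrik's actual expiration logic.

```python
# A chain is a list of snapshots: index 0 is the full backup that anchors it,
# and later entries are incrementals layered on top of their predecessors.

def recover(chain, index):
    """Recovering from an incremental merges the anchor full plus every
    earlier incremental, applying newer changes over older ones."""
    state = {}
    for snapshot in chain[:index + 1]:
        state.update(snapshot["data"])
    return state

def can_expire(chain, index):
    """A snapshot past retention still cannot be removed while any newer
    snapshot in the chain (which depends on it for recovery) remains."""
    return all(s["expired"] for s in chain[index:])

chain = [
    {"data": {"a": 1, "b": 1}, "expired": True},   # full (anchor), past retention
    {"data": {"b": 2},         "expired": True},   # incremental, past retention
    {"data": {"c": 3},         "expired": False},  # incremental, still retained
]
```

Here `recover(chain, 2)` yields `{"a": 1, "b": 2, "c": 3}`, and `can_expire(chain, 0)` is `False`: even though the anchor full is past retention, the newest incremental still needs it, which is exactly why legacy chains keep growing.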


Data Protection for DataStax Enterprise Search Indexes and Databases

DataStax supports a popular feature called DSE Search as part of DataStax Enterprise (DSE). DSE Search allows you to find data and build features like product catalogs, document repositories, and ad-hoc reports. It uses Apache Solr on the backend to enable search operations on any existing DSE table by creating a Solr index in DSE. In the event of data loss, administrators need to restore the DSE data and then rebuild the indexes from scratch, a process that can take days. Using Rubrik Mosaic™, you can now back up and restore DSE Search indexes and their data at wire speed, saving you days of application downtime. Let’s dive deeper into how it all works.

At its core, DSE Search comprises the DSE database, the Apache Solr search interface, and the Apache Lucene engine for indexing and search. When enabled, DSE Search indexes the data distributed on each Cassandra node using the Solr and Lucene libraries. Each search node maintains a highly optimized search index of the data stored on that node. The search indexes are stored alongside the data in the Cassandra data directory and are built incrementally over time as new data is written to the node. In order to utilize the…
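The per-node layout described above, where each node incrementally indexes only its own data and keeps the index alongside that data, can be modeled with a short sketch. This is a simplified illustration of the idea, not DSE's actual Solr/Lucene internals; the `SearchNode` class and its inverted index are invented for the example.

```python
class SearchNode:
    """Toy Cassandra-style node: local rows plus a local inverted index,
    stored together, with the index updated incrementally on each write."""

    def __init__(self):
        self.rows = {}      # row id -> text (the node's local data)
        self.index = {}     # term -> set of local row ids

    def write(self, row_id, text):
        self.rows[row_id] = text
        for term in text.lower().split():          # index only local data
            self.index.setdefault(term, set()).add(row_id)

    def search(self, term):
        return self.index.get(term.lower(), set())

# Data is distributed across nodes; each node indexes only what it stores.
nodes = [SearchNode(), SearchNode()]
nodes[0].write(1, "red mountain bike")
nodes[1].write(2, "red road bike")

# A cluster-wide search fans out to every node and merges the local results.
hits = set().union(*(n.search("red") for n in nodes))
```

Because data and index live side by side on each node, restoring one without the other leaves the search layer inconsistent, which is why protecting them together matters.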