In the second part of my series on Converged Data Management, I’m putting another property under the microscope – Infinite Scalability. The underlying premise is that the fabric providing data management can be deployed in a shared-nothing manner with a limitless architecture focused on linear growth. Whoa, what does that all mean?

Let’s pick these ideas apart, one by one. A shared-nothing system is built from a series of nodes that have no dependency on one another. If a node fails, or parts of a node fail, the fabric remains healthy and operational without penalty. Ideally, this architecture extends beyond the node itself – out to the enclosure, the rack, or even entire data centers. Contrast this with systems that rely on dependencies and use tricks to hide or protect them – load balancers, failover clustering, and so forth. When a failure occurs in such a system, performance suffers because data must funnel through a central choke point – a master server, a limited pool of proxy nodes, or a database instance. Availability is also put at risk, especially since most components in a dependency chain have only a single failover counterpart, given the headache of managing and maintaining the full stack.
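To make the shared-nothing idea concrete, here’s a toy Python model. The class, the node names, and the replication factor of three are all my own illustration – not Rubrik’s design – but the principle is the same: when every object lives on multiple independent nodes, losing a node leaves the fabric fully readable with no central dependency to fail over.

```python
import random

REPLICAS = 3  # illustrative replication factor; real fabrics vary

class Fabric:
    """Toy shared-nothing fabric: each object is replicated across
    REPLICAS distinct nodes, so no single node is a dependency."""
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.placement = {}  # object id -> set of nodes holding a replica

    def write(self, obj_id):
        # Place replicas on distinct nodes, chosen at random.
        self.placement[obj_id] = set(random.sample(sorted(self.nodes), REPLICAS))

    def fail_node(self, node):
        self.nodes.discard(node)

    def readable(self, obj_id):
        # An object survives as long as any replica lives on a healthy node.
        return bool(self.placement[obj_id] & self.nodes)

fabric = Fabric(["n1", "n2", "n3", "n4"])
for i in range(100):
    fabric.write(i)
fabric.fail_node("n1")  # one node dies...
print(all(fabric.readable(i) for i in range(100)))  # True: every object still readable
```

With three replicas spread across four nodes, even a second node failure leaves at least one live copy of every object – the fabric degrades in capacity, not in availability.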

The other idea I touched upon was a limitless architecture focused on linear growth. Put simply, it means you can start with some minimum deployment, such as three nodes, and grow in a very predictable manner at a predictable cost. Let’s use a backup storage array as an example – these are typically deployed with two controllers and a finite number of shelves that can be added for capacity. The controllers, software, and licensing are a large up-front investment because you size the controllers for the growth expected over the next 3 to 5 years (or longer). Sort of like buying a huge semi-truck without any cargo to haul. As your data needs grow, you purchase more shelves to add capacity – adding cargo to your semi-truck. When the controllers have maxed out on the number of shelves that can be attached and your semi-truck is hauling a huge load of cargo, you either buy another set of controllers and start over, or fork-lift in new controllers to replace the old ones. The challenging part is figuring out what size to purchase for the initial set of controllers (who can realistically see that far into the future?), and determining when to fork-lift versus purchase new and migrate.
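The cost curves above can be sketched with a few lines of Python. Every price, shelf limit, and node size here is hypothetical and exists only to show the shape of the two curves: scale-up spending steps sharply at each controller purchase, while scale-out spending grows by the same amount with every node added.

```python
# All figures are hypothetical; real pricing varies wildly by vendor.
CONTROLLER_PAIR = 200_000   # sized up front for years of projected growth
SHELF = 20_000              # capacity added per shelf
MAX_SHELVES = 10            # controllers max out here, forcing a fork-lift
NODE = 30_000               # a scale-out node: compute + capacity together

def scale_up_cost(units):
    """Cumulative spend for a controller/shelf array: a controller pair
    up front, then shelves; exceeding MAX_SHELVES forces another pair."""
    controller_pairs = 1 + units // MAX_SHELVES
    return controller_pairs * CONTROLLER_PAIR + units * SHELF

def scale_out_cost(units):
    """Cumulative spend for a scale-out fabric: pay only for what you add."""
    return units * NODE

for units in (0, 5, 10, 15):
    print(units, scale_up_cost(units), scale_out_cost(units))
```

Note the jump in `scale_up_cost` at unit 10: that is the fork-lift moment, where one extra shelf of growth suddenly costs a whole new controller pair. The `scale_out_cost` increment never changes, which is the linearity the architecture is after.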

That’s not particularly fun, and it’s a pretty big revenue driver for backup storage vendors. With Converged Data Management, linear scale means that each time capacity or performance is needed, new nodes are dropped into the system and contribute their resources to the fabric. There’s no concept of a fork-lift upgrade: as nodes depreciate off the books or reach the end of their useful life, they are simply removed from the fabric as new ones take their place. I’m particularly keen on growing in this manner, as it makes both the accounting and the technical folks happy.

If you happen to be attending VMworld 2015 in Barcelona, catch the upcoming VMworld session STO6287, entitled Instant Application Recovery and DevOps Infrastructure for VMware Environments – A Technical Deep Dive, featuring Rubrik’s CTO, Arvind Nithrakashyap, and me. Don’t worry, I’m bringing stickers and LEGO kits. 🙂