A Bit of History
I still find technology fascinating – don’t you? There’s a certain joy in tinkering, building, and ultimately creating that never gets old.
This goes back to my earliest days, when I used ResEdit to change the Welcome to Macintosh screen to say Be very, very afraid, and it remained true after I graduated from college and dove headfirst into learning Novell NetWare and eDirectory. For those of us with a CNA, the certification never actually expires and doesn’t technically have to be removed from your resume. I’ll let you decide whether it stays or not, though.
Over time, part of my responsibilities both as a data center architect at a customer and then later as a presales technical director at a solutions integrator involved input on new product evaluations. I had to determine how the various products would or wouldn’t fit into my data center or our solutions portfolio.
There are many facets to these kinds of evaluations, so it can be daunting to keep track of the mountains of minutiae – everything from technical to operational to financial and more.
What’s a Design Center?
In time, one idea that I came to focus on was a concept that I now like to call a product’s “design center.” Defining this involves research in three areas – the combined answers to the questions below are the design center.
1. What was a product originally designed to do?
This involves understanding the Minimum Viable Product as originally released and capabilities added shortly thereafter that were part of the original design but weren’t implemented in the first version.
2. What were the prevailing environments when the product was being designed?
From a data center perspective, there are a lot of possibilities here depending on how granular you get. I personally think of the eras as mainframe, open systems (Windows and Linux), virtualization, and cloud. Depending on the products under evaluation, this can be much more granular.
This question can also be extended to look at prevailing LAN and WAN speeds. For example, gigabit and 10 gigabit Ethernet in the data center, not to mention ever-faster internet links, enabled the creation of distributed systems that never before seemed possible. Latency is sadly still bound by the laws of physics, though.
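To put a number on that physics bound, here’s a minimal back-of-the-envelope sketch. The ~2/3-of-c propagation speed in optical fiber and the New York–London distance are rough figures I’m assuming for illustration; real paths add routing detours, switching, and queuing on top of this floor.

```python
# Best-case round-trip time over fiber: bandwidth keeps improving,
# but the speed of light sets a hard floor on latency.

SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in a vacuum, km/s
FIBER_FRACTION = 0.67           # light in fiber travels at roughly 2/3 c

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds over fiber,
    ignoring routing detours, switching, and queuing (all add more)."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
    return one_way_s * 2 * 1000

# New York to London is roughly 5,570 km as the crow flies.
print(f"{min_rtt_ms(5570):.1f} ms")  # → 55.5 ms floor, before any real-world overhead
```

No protocol tuning or design-center refresh gets under that floor, which is exactly why assumptions about distance and latency baked in at design time persist for the life of a product.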
3. What technologies were available as the product was created?
Need a database? Maybe SQL Server on a Windows server or an embedded MySQL server on a Linux appliance is the best option.
Need to handle media designed for offline storage (aka tapes) with fast sequential access but horribly slow random access? Let’s design a whole data movement structure around that.
I’m sure you could add many more examples by digging into the depths of various products and seeing what technologies they’ve used. Windows even uses BSD code for its networking stack.
Each technology chosen is often the best choice at the time, but it may impose limitations or benefits in unforeseen ways years or decades into the future.
Over time, I came to realize that the further a product moves from its design center, the more complex it becomes. As more and more layers are added onto the core product design, complexity is unavoidable. In addition, design choices that were the best option 5, 10, or 20 years ago can later become huge constraints that limit both the pace of feature development and reliability. To put it another way, the closer the design center is to the problems you’re trying to solve today, the simpler the product can be.
The thought process above is one of the things that brought me to Rubrik. When I look at related products, I simultaneously have a lot of respect for them (especially since I’ve personally used many of the products in question) but also can clearly see the vitality and benefit of a current design center relative to products designed in the age of virtualization, open systems, or even mainframe.
What is the design center for your data protection products? If it’s getting challenging to deal with the complexity and lack of capabilities driven by an older design center, let’s talk.