For the past seven years, Hyperconverged Infrastructure (HCI) has been gaining traction with customers that have an on-premises data center requirement for their business. 2018 was a watershed moment: more customers than ever chose to replace their existing “Compute + Storage + Hypervisor” architecture with HCI of some flavor. Now that we have some miles under our belt with HCI, let’s look deeper at the scenarios where HCI does well (the Good), the situations where HCI could stand to improve (the Bad), and the scenarios where HCI just isn’t a good fit (the Ugly).
The Good
Here at Elevate Technology Partners, we have been a part of several HCI deployments with various technology vendors, and one thing remains true: HCI should be the default architecture for the majority of customers. HCI simplifies not only the buying process (fewer things to buy and integrate), but also the “day 2” work of managing and maintaining the solution.
Additionally, HCI is the closest the on-premises world will get to operating like a public cloud: you scale as your workload demands and pay accordingly. When we lived in the world of tying together disparate compute and storage technologies, customers would often buy several years of growth upfront to offset the cost and (in some cases) the technical challenges of expanding those platforms over time. With HCI, all of that is a thing of the past: a customer buys only the unit of computing that needs to be expanded, when it needs to be expanded.
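To make the economics concrete, here’s a minimal sketch in Python (the node cost and yearly node counts are made-up assumptions for illustration, not vendor pricing) comparing the traditional “buy several years of growth upfront” approach with adding HCI nodes only as the workload demands:

```python
# Illustrative only: the node cost and yearly node counts are hypothetical
# assumptions, not vendor quotes. The point is the shape of the spend.

NODE_COST = 40_000        # assumed cost of one HCI node
NODES_NEEDED = [4, 6, 8]  # assumed nodes required in years 1, 2, and 3

# Traditional approach: buy year-3 capacity on day one.
upfront_spend = NODES_NEEDED[-1] * NODE_COST

# HCI approach: buy only the increment each year, as the workload grows.
yearly_spend = [NODES_NEEDED[0] * NODE_COST] + [
    (curr - prev) * NODE_COST
    for prev, curr in zip(NODES_NEEDED, NODES_NEEDED[1:])
]

print(f"Day-1 spend, traditional: ${upfront_spend:,}")      # $320,000
print(f"Day-1 spend, HCI:         ${yearly_spend[0]:,}")    # $160,000
print(f"Total spend, HCI:         ${sum(yearly_spend):,}")  # $320,000
```

The total hardware spend comes out the same in this toy example; what changes is when the dollars leave the building, and deferred dollars can be spent on newer, denser, or cheaper nodes when the growth actually arrives.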
As it pertains to workloads, the HCI world of 2019 can support nearly any x86-centric workload you can imagine without giving up much in the way of availability or performance. Like any architecture, there are workloads that are a better fit for HCI; in no particular order, those would be general virtualized servers (or “GenPop,” as it is often called), Virtual Desktop Infrastructure (VDI), and what I’ll call “mid-size” database environments: those with less than 5TB of data and fewer than 16 vCPUs per VM.
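As a quick illustration of those rule-of-thumb thresholds, here’s a trivial sketch (the 5TB and 16 vCPU cutoffs are the ones stated above; the function and parameter names are mine, not part of any sizing tool):

```python
# A trivial fit check based on the rule-of-thumb thresholds stated above.
# These are this article's heuristics, not a vendor sizing formula.

def is_midsize_db(data_tb: float, vcpus: int) -> bool:
    """True if a database VM falls in the 'mid-size' HCI sweet spot."""
    return data_tb < 5 and vcpus < 16

print(is_midsize_db(data_tb=3.2, vcpus=12))  # True  -- a good HCI candidate
print(is_midsize_db(data_tb=9.0, vcpus=32))  # False -- deserves closer scrutiny
```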
The Bad
As outlined above, albeit briefly, HCI fits the bill for many customer needs. That said, it isn’t a panacea, and just like every computing architecture before it, it can’t solve every customer need, regardless of what the various HCI vendors say. To that end, let’s talk about the areas where HCI could stand to improve.
The most common issue we have experienced when sizing HCI solutions for customers occurs when a customer’s compute workload is not proportional to their storage need. Admittedly, this is a subjective situation, as every customer is different, but I’ll try to quantify it. Say a customer has 26 CPUs (sockets) in their environment but needs 700TiB of storage. The challenge in this scenario is keeping the cost of HCI in check despite the imbalance: every node added for storage capacity also carries compute the customer doesn’t need. Even with vendors that support “storage-only” nodes, the customer often gives something up (e.g. simplified day 2 operations, performance, support for only a single operating system, etc.) to meet the requirement.
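To put rough numbers on it, here’s a sizing sketch (the node specs are hypothetical assumptions for illustration, not any vendor’s actual hardware) showing how the storage requirement, not the compute requirement, ends up dictating the size and cost of the cluster:

```python
import math

# Hypothetical HCI node spec -- illustrative assumptions, not a real SKU.
SOCKETS_PER_NODE = 2
USABLE_TIB_PER_NODE = 40  # usable capacity after replication overhead

# The customer requirement from the example above.
required_sockets = 26
required_tib = 700

nodes_for_compute = math.ceil(required_sockets / SOCKETS_PER_NODE)  # 13
nodes_for_storage = math.ceil(required_tib / USABLE_TIB_PER_NODE)   # 18

print(f"Nodes needed for compute: {nodes_for_compute}")
print(f"Nodes needed for storage: {nodes_for_storage}")
# The cluster must be sized to the larger number, so the customer buys
# (and often licenses) five nodes' worth of compute they don't need.
print(f"Cluster size: {max(nodes_for_compute, nodes_for_storage)} nodes")
```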
Another architectural challenge with HCI occurs when a customer has a non-virtualized server (whether x86 or RISC architecture) that they want to connect to their HCI environment, usually to provide block storage to that non-virtualized workload. Candidly, very few HCI vendors can support this requirement, and even then it requires a thorough understanding of not only the solution requirements but also the customer use case to ensure it can be done without negatively impacting the customer’s environment. The key to supporting physical server workloads is to present software-defined storage from the HCI solution without putting too many limitations on the operating system (for example, supporting only certain patches or versions) and, ultimately, jeopardizing support of the applications that reside on those servers.
The Ugly
Just like any computing architecture, there are scenarios where HCI simply isn’t a good fit. The reasons vary, but the most common ones tend not to be about technology; they center on personnel and business process.
From the personnel, or people, perspective, the trouble arises when HCI is to be deployed in an environment with established teams that aren’t cross-functional (for example, dedicated storage teams and dedicated server teams). By its very nature, deploying an HCI solution forces cross-functional behavior within an organization, and if that organization isn’t willing to embrace the change, then HCI will fail to be adopted.
There are also scenarios where business processes, specifically software licensing structures, create challenges for the adoption of HCI. Some software vendors license their products by the total cores in the cluster rather than by physical sockets or CPUs, and some go further, requiring that every core in the cluster that could run the workload be licensed. In situations where CPU and storage aren’t proportional, this can make the total cost of deploying HCI prohibitive.
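A quick worked example ties this back to the storage-heavy cluster sketched earlier (the core counts and per-core price are hypothetical assumptions, not any vendor’s price list):

```python
# Hypothetical figures for illustration; per-core pricing varies widely by
# vendor and agreement, and none of these numbers come from a price list.

CORES_PER_SOCKET = 16
SOCKETS_PER_NODE = 2
LICENSE_PER_CORE = 3_000

nodes_for_compute = 13  # nodes the workload actually needs (earlier sketch)
nodes_for_storage = 18  # cluster size once storage capacity is satisfied

def cluster_license_cost(nodes: int) -> int:
    """Cost when every core that could run the workload must be licensed."""
    return nodes * SOCKETS_PER_NODE * CORES_PER_SOCKET * LICENSE_PER_CORE

print(f"Licensing only the compute needed: ${cluster_license_cost(nodes_for_compute):,}")
print(f"Licensing the entire cluster:      ${cluster_license_cost(nodes_for_storage):,}")
# The five storage-driven nodes add 160 licensed cores -- roughly $480,000
# in this example -- that the workload never touches.
```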