By Cassandra Balentine
Hyperconverged infrastructures (HCI) allow for a total infrastructure management solution. These solutions combine critical infrastructure elements, including compute, storage, and networking, to consolidate, automate, and simplify operations.
HCI streamlines the deployment, management, and scaling of datacenter resources by combining server and storage resources with intelligence software, according to HCI provider, Nutanix. “Separate servers, storage networks, and storage arrays can be replaced with a single hyperconverged solution to create an agile datacenter that easily scales with your business,” notes the provider’s website.
Components and Capabilities
In an effort to streamline datacenter resources, HCI provides an all-in-one solution.
Nutanix says most HCI solutions consist of two fundamental components: a distributed data plane and a management plane. The company’s website explains that the distributed data plane runs across a cluster of nodes, delivering storage, virtualization, and networking services for guest applications like virtual machines (VMs) or container-based applications (apps). The management plane enables easy administration of all HCI resources from a single view, eliminating the need for separate management solutions for servers, storage networks, storage, and virtualization.
Ariel Maislos, CEO/founder, Stratoscale, adds that true HCI comprises, at minimum, compute, storage, and networking capabilities. “For many customers, the key element is the software-defined, built-in capabilities. We also see a strong demand for decoupling the solution from the underlying infrastructure and eliminating vendor lock-in.”
HCI emerged by extending the consolidation value of compute virtualization to shared storage. “By tapping into rapid advancements in CPU, flash memory, and low latency Ethernet technologies, a scale-out HCI system based on modular x86 server components delivers aggregated, protected, and integrated shared storage at a fraction of the cost of separate legacy SAN/NAS arrays,” shares Lee Caswell, VP storage and availability products, VMware. “Even more importantly, HCI offers a new agile operation model where data services are managed along with server resources as combined attributes of virtual machines or containers, rather than as separate silo resource pools.”
From this compute and storage beginning, HCI expanded the application-centric operational model to include virtual networking, cloud resource access, and unified management of automated monitoring, alert generation, and application blueprinting. “The promise of HCI is that this complete compute, network, and storage stack can be delivered on any hardware—from the edge to the core to the public cloud,” offers Caswell.
Sushant Rao, senior director of product and solutions marketing, DataCore Software, points out that the difference between a virtualized cluster with a storage area network (SAN) and a virtualized hyperconverged cluster is the elimination of the physical SAN. They both have virtualized compute. Networking may or may not be virtualized but it’s not the difference between a cluster being hyperconverged or not. “So, the key element of an HCI is the sharing of direct-attached storage (DAS) across a cluster of hosts.”
Rao adds that for many organizations, the availability of business-critical data is important. “Due to this need for data availability, virtualized hosts require access to storage that contains the VMs. Typically, they achieve that access via an external SAN or network-attached storage (NAS) device,” he explains. “However, these types of devices are a single point of failure. If the SAN or NAS fails or becomes unavailable, all of the VMs and hosts then go offline.”
HCI addresses this problem by using the DAS devices in the hosts. “Data availability involves implementing data redundancy and maintaining accessibility even in the event of a host failure within the cluster due to hardware malfunctions, site failures, regional disasters, and user errors. HCI achieves this by using software to abstract the direct-attached storage in each host, pool it, and present it as a virtual shared storage to the hosts,” says Rao. The software then replicates the data across all hosts in the cluster so that if one host were to fail, the data is still available on other hosts and the VMs can be restarted on a different host to maximize data availability and application continuity.
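The pool-and-replicate idea Rao describes can be sketched in a few lines of Python. Everything below (the class names, the write/read logic, the replica count) is a hypothetical illustration of the concept, not any vendor's actual implementation:

```python
# Sketch: each host's direct-attached storage (DAS) is pooled, and every
# write is replicated to multiple hosts so data survives a host failure.

class Host:
    def __init__(self, name):
        self.name = name
        self.local_storage = {}   # block id -> data on this host's DAS
        self.alive = True

class HyperconvergedCluster:
    def __init__(self, hosts, replicas=2):
        self.hosts = hosts
        self.replicas = replicas

    def write(self, block_id, data):
        # Replicate each block to `replicas` distinct live hosts.
        targets = [h for h in self.hosts if h.alive][:self.replicas]
        if len(targets) < self.replicas:
            raise RuntimeError("not enough live hosts for replication")
        for h in targets:
            h.local_storage[block_id] = data

    def read(self, block_id):
        # Any live host holding a copy can serve the read.
        for h in self.hosts:
            if h.alive and block_id in h.local_storage:
                return h.local_storage[block_id]
        raise KeyError(block_id)

cluster = HyperconvergedCluster([Host("a"), Host("b"), Host("c")])
cluster.write("vm-disk-1", b"payload")
cluster.hosts[0].alive = False   # simulate a host failure
data = cluster.read("vm-disk-1") # still served from a surviving replica
```

Real HCI software also rebalances replicas after a failure and restarts the affected VMs elsewhere; this sketch only shows why the data remains readable.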
Compared to traditional converged infrastructure, Craig Nunes, VP of marketing, Datrium, says HCI simplifies deployment. And by replacing LUN-based management with VM-centric administration, ongoing management is also easier.
At a minimum, HCI aggregates, protects, and offers unified management of storage resources and data services across servers that are connected by Ethernet and that simultaneously offer virtual compute resources. Caswell says HCI systems can generally start small with as few as two nodes and scale out to tens of nodes.
Several organizations potentially benefit from HCI, including small data centers, application clusters, and remote office and branch office (ROBO) environments.
Rao says ROBO environments are remote sites that sometimes have power, cooling, and space efficiency (PCSE) requirements so the infrastructure must be small and highly available.
Similarly, small data centers only have a few hosts, so consolidating the central SAN into the server tier improves PCSE. Also, instead of running a variety of applications in one cluster, a dedicated application cluster with hyperconverged makes it easier to manage the performance and availability of that application, points out Rao.
Maislos also believes data centers are prime candidates for HCI, particularly those that own vast amounts of legacy infrastructure that would otherwise be considered obsolete for non-HCI solutions. “The ability for any organization to reuse its commercial off-the-shelf servers in a new and significantly better solution makes the organizations an ideal candidate for HCI,” he comments.
Virtual desktop infrastructures (VDI), file and print services, and databases are common applications that run on hyperconverged clusters, according to Rao.
Nunes adds that organizations deploying VDI, single-use workloads, or edge deployments can take advantage of the benefits of HCI. “In each of these cases, deployment of homogeneous HCI nodes with VM-centric management simplifies administration and can speed project ROI.”
Caswell states that the primary decision maker for HCI is the infrastructure designer, not the traditional storage administrator. He points to a recent study of VMware vSAN users showing that the director of IT/infrastructure is most often responsible for deciding to implement HCI at a company, followed by the vSphere administration team, executive leadership, and the storage administration team. “The CAPEX and OPEX savings of HCI solutions over separated compute and legacy storage appeal to organizations of all sizes, from small organizations relying on IT generalists to large organizations seeking operational savings and a technological advantage over the competition. Organizations frequently start with small scale deployments to characterize HCI and then quickly expand their HCI footprint once they get a sense of the agility that HCI offers to businesses struggling to keep up with the pace of digital business.”
The limitations of HCI include performance shortfalls, the inability to scale capacity and compute independently, difficulty integrating with other areas of the data center, and limited vendor choices.
Rao says today’s demanding applications require high performance storage to provide the low latency needed from these enterprise workloads. “For a variety of reasons however, HCI typically does not provide a consistent, high-level performance to high I/O applications.”
To combat this, he says vendors require the addition of flash as the default storage media in the hyperconverged cluster. “However, this results in higher costs and still may not provide the performance needed.”
Unlike legacy storage systems that are often narrowly suited for individual workloads, Caswell points out that HCI tends to support a mixed set of applications with varied I/O requirements. “With added flash performance, there are few applications not suited for HCI,” he shares.
In terms of scalability, data and compute growth do not usually go hand in hand. “For many companies, data is growing at a much faster rate than compute, which means that additional storage capacity will be needed sooner. With a traditional hyperconverged system, that storage is typically added via additional nodes, but this also requires adding compute capacity, which is then wasted,” says Rao.
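The stranded-compute problem Rao describes is back-of-the-envelope arithmetic. The node specification and workload figures below are invented purely for illustration:

```python
# Hypothetical HCI node bundling fixed compute and storage together.
CORES_PER_NODE = 32
TB_PER_NODE = 10

# What the workload actually needs today.
cores_needed = 80
tb_needed = 60

# Ceiling division: nodes required to satisfy each dimension alone.
nodes_for_compute = -(-cores_needed // CORES_PER_NODE)  # 3 nodes
nodes_for_storage = -(-tb_needed // TB_PER_NODE)        # 6 nodes

# Because compute and storage ship together, the larger number wins,
# and the surplus cores sit idle.
nodes = max(nodes_for_compute, nodes_for_storage)
wasted_cores = nodes * CORES_PER_NODE - cores_needed
```

With these made-up numbers, storage demand forces six nodes and strands 112 cores, which is exactly the mismatch that disaggregated or storage-only expansion nodes aim to avoid.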
Despite early success with VDI, single-use clusters, and edge deployments, HCI has not translated to success in larger data centers, says Nunes. “The fundamental limitation with HCI architectures is a result of both I/O processing and durable capacity existing together in the HCI server node. This architectural tenet has led to challenges in supporting scalable, mission-critical, low latency workloads, a core requirement for multi-cloud enterprises if they are to effectively replace their array-based infrastructure with HCI,” he explains.
HCI systems typically mirror data across multiple participating nodes for data availability. “There is a choice at configuration time between keeping two or three copies of data. The reason for a two-copy option is because it is too expensive to store three copies of the same data. However, that cost savings comes at a significant risk of data loss and/or data unavailability. The loss of a single node and a single sector read error on the remaining copy can result in data loss at worst, or data unavailability at best,” explains Nunes.
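The risk Nunes describes is at heart a compound-probability argument. The failure rates below are invented solely to show the shape of the comparison, not to represent real hardware statistics:

```python
# Hypothetical per-rebuild-window failure rates, for illustration only.
p_node_loss = 0.01       # chance a node fails before a rebuild completes
p_sector_error = 0.001   # chance the surviving copy has an unreadable sector

# Two copies: one node loss leaves a single copy, so a single sector
# read error on it is enough to lose data.
p_loss_2copy = p_node_loss * p_sector_error

# Three copies: roughly, a second independent node loss must also occur
# before the sector error on the last copy matters.
p_loss_3copy = p_node_loss * p_node_loss * p_sector_error
```

Under these made-up rates, the three-copy configuration is about two orders of magnitude less likely to lose data, which is the trade-off against the extra storage cost.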
From an administration perspective, many hyperconverged clusters are viewed as data islands or silos with a different management platform from the rest of the infrastructure. “Because of this, there is an added complexity and additional training is often required,” says Rao.
With HCI it is not uncommon for vendors to set integration limitations. “Some vendors limit the choice in hypervisor, while others limit hardware choice. This results in vendor lock-in, which reduces flexibility and increases costs,” adds Rao.
Maislos says the birth of the public cloud introduced a new paradigm and a fundamental shift happened in the way infrastructure is thought about. “As the industry grappled with the idea of self-service and on-demand resources, developers’ demands were increasingly shaped by the ease of use and simplicity of the public cloud. That paradigm enabled developers to shift their entire focus to business logic, the company’s core IP, and eliminated the need to set up, maintain, and scale all other components,” he explains. “Customers are now focused on transforming enterprise environments to deliver cloud services and consistent DevOps, realizing that virtualization is not a cloud, and neither is HCI. There are layers to be added to such basic features before we approach the functionality of the cloud, and each layer requires its own knowledge and skills to implement, manage, and maintain. As companies consider the right solution, they need to take into account the required abstraction that will reduce the complexity of managing these layers.”
Depending on an organization’s needs, various trends drive HCI adoption.
Nunes points out that complexity in deploying and managing private and hybrid cloud infrastructure is one of the key drivers for HCI adoption. “IT professionals are seeking simpler converged solutions, yielding faster, more reliable results.”
One source of complexity comes from array controller bottlenecks and SAN latencies related to the adoption of flash-based media. “Compounding that issue is LUN-based administration, which is at odds with more simple application-centric management. LUN-level objects also force many organizations into buying a dedicated backup platform to achieve VM-level backup and restore,” adds Nunes.
He explains that while many organizations have turned to converged infrastructure solutions to simplify, these generally did not go far enough, being little more than procurement bundles. HCI systems were subsequently introduced to address the complexity of legacy converged infrastructure. “These systems effectively eliminated the array controller bottleneck by moving both I/O processing and durable capacity to the server in a scale-out architecture. In addition, most HCI offerings provide VM-centric management, eliminating the LUN administration burden.”
Rao says the ease of purchase and deployment also drives adoption of HCI. For a new deployment, he says selecting, buying, and deploying both servers and storage can take a fair amount of time. “Instead, with HCI, companies can purchase and deploy an HCI solution in a much faster timeframe.”
Rao also points to the growth HCI accommodates as a driving factor of its adoption. “When the cluster needs to grow, HCI allows for a straightforward process to buy additional nodes to add to the cluster.”
HCI solutions provide greater availability at the lower end. “The smallest virtualized cluster requires two hosts and a SAN. However, low-end SANs do not have the redundancy of the larger storage systems. If a host fails, the other host can run the VMs. But, if the SAN goes down, the entire cluster goes down. With HCI clusters, the other nodes can run the VMs from a failed node,” offers Rao.
Maislos says the promise of reducing operational complexity and achieving a supremely agile data center that is also capable of spinning up hundreds of VMs in minutes, and can easily scale all resources as demand grows, are key drivers of HCI adoption. “For an increasing number of customers, the driver is also modernizing existing hardware and repurposing based on a software-only and hardware-agnostic HCI solution.”
Caswell says many customers start looking at HCI during storage or server refresh times. For those looking at flash storage upgrades, HCI offers comparable performance at a fraction of the cost. For server customers, it offers an opportunity to speed up IT responsiveness over a traditional three-tiered infrastructure with an operational model that allows quicker alignment to changing application needs.
Organizations face fundamental IT issues, including smaller budgets, rapid technology advancements, and higher demands.
“In this kind of environment, organizations are looking for infrastructure solutions that fit their budgets, are easy to learn, and can scale and adapt very fast—the exact problems HCI solutions are architected to solve,” offers Caswell.
HCI enables a total infrastructure management solution, offering streamlined and simplified infrastructure for the right candidate. SW
May 2018, Software Magazine