SolarWinds: Before jumping on the HCI bandwagon

HCI solutions are incredibly popular right now. This popularity could, in part, be because vendors bill HCI as the ultimate easy-to-manage, plug-and-play data centre strategy. In reality, HCI is not the remedy for all data centre ills and requires oversight, says Sascha Giese, Head Geek, SolarWinds

Deploying HCI doesn’t automatically guarantee a better database management experience. Success hinges on the ability to scale the solution and apply it where it will be most effective, says Giese.

The silos dividing compute, networking and storage have long made managing data centres a challenging task. This simple fact explains the popularity of hyperconverged infrastructure (HCI), a simplified IT framework that brings compute, storage and networking together under a single hypervisor-based management view.

HCI solutions are incredibly popular right now. In 2018, sales of hyperconverged systems grew 57.2% year on year, generating $1.9 billion in revenue and making up 46.5% of the total converged systems market.

This popularity could, in part, be due to vendors billing HCI as the ultimate easy-to-manage, plug-and-play data centre strategy. Taking every vendor’s claim at face value would leave you believing HCI is the remedy for all data centre ills.

In reality, HCI still requires oversight. Applying the solution requires thought and careful consideration of what can and cannot be achieved. 

Hard truths

HCI’s big selling point is reduced complexity. Having one vendor roll servers, applications, virtual machines, and so on into a single user interface acts as an antidote to the siloed nature of databases. The idea is that this simplicity reduces the domain knowledge needed to manage databases, making management more accessible across an organisation’s workforce.

This appealing simplicity, however, feeds into some of HCI’s biggest downsides. 

By design, HCI is a one-size-fits-all solution, meaning it often can’t scale to the unique needs of individual database applications. The appliance footprint might simply be too big for the virtual machines or other workloads being migrated to it. You may need only one additional host, but the smallest hyperconverged appliance may comprise four or more hosts.
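
As a rough illustration of that sizing mismatch (the host counts below are hypothetical, not drawn from any particular vendor’s appliance), the arithmetic might look like this:

    # Hypothetical sizing sketch: a minimum appliance size can exceed actual need.
    # All figures are illustrative assumptions, not vendor specifications.
    needed_extra_hosts = 1     # growth actually requires one more host
    min_appliance_nodes = 4    # smallest hyperconverged appliance ships as four nodes

    stranded_nodes = min_appliance_nodes - needed_extra_hosts
    print(f"Capacity needed: {needed_extra_hosts} host(s)")
    print(f"Smallest purchasable appliance: {min_appliance_nodes} node(s)")
    print(f"Idle capacity paid for up front: {stranded_nodes} node(s)")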

HCI is so simple to use in part because virtualisation, compute and storage resources can all be bought from a single vendor and applied to that vendor’s reference architecture. Organisations may initially welcome the absence of multiple supplier relationships, but things can turn sour when the vendor raises prices or is slow to adapt to emerging technologies. Organisations should be conscious that opting to work with a single vendor leaves them locked in.

Getting things right

Hyperconvergence offers agility, availability, and scalability, but there’s no benefit in providing resources if those infrastructure services can’t be consumed and converted effectively and efficiently into a complete application solution. In other words, make sure you have a rock-solid use case and understand when the traditional trinity (compute/network/storage) or even the cloud is more beneficial, as HCI isn’t always the best solution.

Careful consideration before deploying HCI makes the solution’s success more likely. The process of implementing HCI should always begin with a clear integration plan. Understanding the IT department’s personnel and the requirements of the various dependent business units, including delivery tempo and workflow processes, is vital for determining what hardware is needed now and what can be scaled later.

Once HCI is implemented, determining how and when to scale is important. The linear scaling capability inherent in hyperconverged solutions (i.e., if one unit can handle X transactions, two units can handle roughly double that) puts the onus on you to understand your applications well enough to scale appropriately. That kind of firm understanding can only be established through comprehensive network monitoring.
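
As a minimal sketch of that planning step, assuming a per-node transaction rate and a peak demand figure taken from your own monitoring (both values below are hypothetical), the linear-scaling arithmetic could be expressed as:

    import math

    # Minimal capacity-planning sketch based on HCI's roughly linear scaling.
    # tps_per_node and peak_tps are hypothetical values you would derive from
    # your own monitoring baseline, not figures from any specific product.
    def nodes_required(peak_tps: float, tps_per_node: float, headroom: float = 0.25) -> int:
        """Estimate the node count for a peak transaction rate plus safety headroom."""
        return math.ceil(peak_tps * (1 + headroom) / tps_per_node)

    print(nodes_required(peak_tps=18_000, tps_per_node=5_000))  # -> 5 nodes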

Visibility is key

HCI might transform server or storage architectures, but the age-old need to efficiently monitor these architectures remains the same.

The application performance management associated with HCI requires visibility across the whole stack. Many presume that, because the vendor provides the entire HCI infrastructure, the vendor’s monitoring solution will suffice. In reality, these solutions often don’t bridge the gap between the view offered by the HCI vendor and the existing view of the wider infrastructure. Applying an effective monitoring toolset, in addition to the vendor’s monitoring solution, provides the granularity and insight needed to understand the baseline of application performance.
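
As a hedged illustration of what establishing such a baseline can mean in practice (the sample values and the three-sigma threshold below are assumptions, not features of any particular monitoring product), a simple response-time baseline might be derived like this:

    from statistics import mean, stdev

    # Illustrative baseline sketch: flag response-time samples that drift well
    # above the historical norm. Values and threshold are assumptions only.
    history_ms = [42, 45, 41, 44, 47, 43, 46, 44, 45, 43]  # past response times
    baseline = mean(history_ms)
    tolerance = 3 * stdev(history_ms)

    def is_anomalous(sample_ms: float) -> bool:
        """True if a new sample exceeds the baseline by more than the tolerance."""
        return sample_ms > baseline + tolerance

    print(is_anomalous(46))   # False - within normal variation
    print(is_anomalous(120))  # True  - well outside the baseline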

Poor visibility can negatively affect the end-user experience, as is the case with any database. What differentiates HCI, however, is how a lack of visibility can lead to over-provisioning of resources and high capital expenditure. Organisations often purchase more HCI infrastructure than they need, and without full visibility they may never know they’re paying over the odds for capacity that sits idle. Nobody wants to make this costly mistake.
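
To ground the over-provisioning point, here is a small hypothetical check comparing purchased capacity with peak observed utilisation; the utilisation figures are invented for illustration.

    # Hypothetical over-provisioning check: compare each HCI node's peak CPU
    # utilisation against what was purchased. Figures are illustrative only.
    peak_cpu_utilisation = {
        "node-1": 0.41,
        "node-2": 0.38,
        "node-3": 0.22,
        "node-4": 0.09,
    }

    cluster_peak = sum(peak_cpu_utilisation.values()) / len(peak_cpu_utilisation)
    print(f"Cluster peak utilisation: {cluster_peak:.0%}")
    if cluster_peak < 0.5:
        print("More than half the purchased capacity sat idle even at peak load.")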

Route to success

Managing data centres is complex and time-consuming, so it is no wonder that a solution promising to simplify the process is popular. Deploying HCI doesn’t automatically guarantee a better database management experience. Success hinges on the ability to scale the solution and apply it where it will be most effective. By implementing far-reaching monitoring, organisations can realise the full potential of HCI.
