By Matt Shore
We used to call them “monster VMs”, maybe because they were big, hungry, hard to manage and could sometimes leave destruction in their wake! All joking aside, we’ve come to recognise that resource-hogging mega-VMs are becoming a big and expensive issue in data centres.
What do we mean by ‘mega-VM’? In a hyperconverged environment, mega-VMs are what happens when an application’s storage and compute needs are out of sync: perhaps it needs massive storage but relatively little compute, or very high performance but little storage capacity.
This kind of mismatch is a big problem when storage and compute resources can’t be scaled independently. In that situation, our customers are finding that HCI delivers a fraction of what they’d hoped for. Instead of 15-20 compute VMs per node, for example, they may only be able to fit three or four mega-VMs, while the matching storage capacity sits idle.
This is not the flexibility and efficiency that HCI promised. The idea of consolidating storage, networking and compute resources into one ‘mega-server’ that is simple and quick to scale has worked well for many applications, but definitely not all of them. As a result, data centre managers must often spend time reallocating resources, moving VMs to different nodes or changing limits on certain applications. One misbehaving ‘monster’ can cause a slowdown, or worse, in the applications with which it shares resources.
We’ve identified three different kinds of mega-VM. The first is mission-critical applications like Oracle databases or SAP HANA installations that need their own pool of I/O. The second is applications with high peak requirements at predictable times, for example reporting software at the end of a quarter, or a large payroll-processing application.
Finally, we find mega-VMs in a test or development environment, especially now that developers are using AI and ML. These VMs often require peak I/O at unpredictable intervals. AI and ML require fast access to large datasets to produce the results we prize: insights into customers, products and the market.
In all these cases, traditional HCI can lead to expensive (and wasteful) overprovisioning.
HPE HCI 2.0 does away with these inefficiencies. Unlike conventional HCI, which runs compute and storage together on the same off-the-shelf nodes, HCI 2.0 is disaggregated. With HPE ProLiant servers and HPE Nimble Storage dHCI, running together under smart VM-aware management software called HPE InfoSight, compute and storage can be managed and scaled independently, according to the data centre’s own requirements. If more storage is needed, HPE Nimble Storage units can be plugged in and made ready for apps within 15 minutes. If more compute is required, HPE ProLiant servers added to the network auto-configure with little oversight required.
HPE InfoSight can be configured to ensure that resources are allocated where and when they are needed. VMs can be moved automatically, and seamlessly, from one node to another to make room for a mega-VM running a ‘hungry’ application, or when InfoSight detects a maintenance issue like an imminent drive failure.
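To make that idea a little more concrete, here is a minimal sketch, in Python, of the kind of rebalancing decision we are describing: when a node runs hot, a smaller VM is moved elsewhere so the resident mega-VM keeps its headroom. It is purely illustrative; the node names, thresholds and migrate() helper are hypothetical, and this is not how HPE InfoSight actually works under the hood.

# Illustrative sketch only: a toy rebalancer that moves a small VM off an
# overloaded node so the resident 'mega-VM' keeps its headroom. The data
# model, thresholds and migrate() helper are hypothetical and do not
# represent HPE InfoSight's actual algorithm or API.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu_demand: float    # fraction of a node's CPU capacity this VM consumes
    iops_demand: float   # fraction of a node's I/O capacity this VM consumes

@dataclass
class Node:
    name: str
    vms: list = field(default_factory=list)

    def load(self) -> float:
        # A node is "hot" when either CPU or I/O demand approaches capacity.
        return max(sum(v.cpu_demand for v in self.vms),
                   sum(v.iops_demand for v in self.vms))

def migrate(vm: VM, src: Node, dst: Node) -> None:
    # Stand-in for a live-migration call via the hypervisor's API.
    src.vms.remove(vm)
    dst.vms.append(vm)
    print(f"Moved {vm.name}: {src.name} -> {dst.name}")

def rebalance(nodes: list, hot_threshold: float = 0.8) -> None:
    """If a node is running hot, move its smallest VM to the least-loaded node."""
    for node in nodes:
        if node.load() <= hot_threshold or len(node.vms) < 2:
            continue
        smallest = min(node.vms, key=lambda v: max(v.cpu_demand, v.iops_demand))
        target = min((n for n in nodes if n is not node), key=Node.load)
        if target.load() + max(smallest.cpu_demand, smallest.iops_demand) < hot_threshold:
            migrate(smallest, node, target)

if __name__ == "__main__":
    nodes = [
        Node("node-1", [VM("oracle-db", 0.55, 0.70), VM("web-01", 0.20, 0.15)]),
        Node("node-2", [VM("web-02", 0.10, 0.10)]),
    ]
    rebalance(nodes)   # prints: Moved web-01: node-1 -> node-2

Real placement decisions also have to respect things like affinity rules and storage locality, which is exactly the kind of bookkeeping that benefits from being automated.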
HPE InfoSight uses systems modelling, predictive algorithms and statistical analysis to solve storage administrators’ most difficult problems, ensuring that storage resources are dynamically and intelligently deployed to satisfy the changing needs of business-critical applications.
At the heart of HPE InfoSight is a powerful engine that applies deep data analytics to telemetry data that is gathered from HPE Nimble Storage arrays deployed across the globe. More than 30 million sensor values are collected per day for each HPE Nimble Storage array. The HPE InfoSight engine transforms the millions of gathered data points into actionable information that helps customers realise significant operational efficiency.
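To give a flavour of what turning millions of raw sensor readings into actionable information can involve, here is a minimal sketch that flags readings drifting well outside their recent baseline. The sensor name, window size and threshold are assumptions made for illustration; this is not HPE InfoSight’s real analytics pipeline.

# Illustrative sketch only: flag telemetry readings that deviate sharply from
# a rolling baseline. The sensor name, window size and threshold are
# hypothetical and do not reflect HPE InfoSight's real analytics engine.
from collections import deque
from statistics import mean, stdev

class SensorBaseline:
    """Keeps a rolling window of readings per sensor and flags outliers."""

    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.window = window
        self.z_threshold = z_threshold
        self.history = {}   # sensor name -> deque of recent readings

    def observe(self, sensor: str, value: float) -> bool:
        """Record a reading; return True if it looks anomalous."""
        samples = self.history.setdefault(sensor, deque(maxlen=self.window))
        anomalous = False
        if len(samples) >= 10:   # wait for a minimal baseline before judging
            mu, sigma = mean(samples), stdev(samples)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        samples.append(value)
        return anomalous

if __name__ == "__main__":
    baseline = SensorBaseline()
    # Steady read-latency readings (ms), then a spike worth reporting.
    for reading in [1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 1.2, 1.1, 0.9, 1.0, 1.1, 9.5]:
        if baseline.observe("array-01/read_latency_ms", reading):
            print(f"Anomaly: read latency {reading} ms is far above the recent baseline")

The real engine, of course, correlates behaviour across arrays deployed around the world rather than watching a single sensor, which is what makes predictive support possible.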
This deep analysis saves valuable hours and resources that would otherwise be spent on troubleshooting: if HPE InfoSight cannot resolve a problem automatically, it is reported to HPE support, who will provide Level 3 support remotely.
HCI 2.0 from HPE revolutionises the data centre, making your business ready for the demands of compute- and storage-hungry mega-VMs and opening the path to more efficient and productive data operations.