Hardware suitable for cloud environments
Key to Success
AMD Instead of Intel
The concrete recommendations for compute hardware are therefore clear: Current AMD Epyc CPUs not only deliver more cores for less cash, they also beat their Intel counterparts in virtually all current benchmarks. 2U systems are the standard, and the only local storage in the compute nodes should be two small SSDs that hold the operating system.
Today, all common clouds provide storage for virtual machines as network-attached storage, so further local storage is unnecessary – at least if you follow the recommendations in this article and do not run your software-defined storage (SDS) hyperconverged on the compute nodes.
Ratio of RAM to vCPUs
Many admins send their servers into the fray with massive amounts of RAM and would never even consider less than 512GB per machine. Whether that much is useful and necessary depends on the number of vCPUs per server and the assumed breakdown of VM sizes.
Assume a 2U server provides two CPUs with 64 physical cores each. If you subtract eight cores for the host system, 120 physical cores remain, which translates into 240 logical cores thanks to hyperthreading. Assuming an overcommit factor of 4, a total of roughly 1,000 vCPUs would be available in the 2U system. With 512GB of RAM, the rough calculation gives you just over 0.5GB of RAM per vCPU, which, although workable, is pretty much at the lower limit. One terabyte of RAM, or about 1GB per vCPU, would therefore be the better option in such systems. An overcommit ratio of 1:4 is typical.
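This back-of-the-envelope math is easy to script. The following minimal Python sketch simply encodes the example values from above (they are illustrations, not fixed rules) and computes the sellable vCPUs of a node and the resulting RAM per vCPU:

```python
# vCPU and RAM sizing estimate for a compute node (example values from the text).

PHYSICAL_CORES = 2 * 64   # two 64-core Epyc CPUs in a 2U server
HOST_RESERVED = 8         # cores set aside for the host system
SMT_FACTOR = 2            # hyperthreading doubles the logical cores
OVERCOMMIT = 4            # typical 1:4 CPU overcommit

def sellable_vcpus(physical=PHYSICAL_CORES, reserved=HOST_RESERVED,
                   smt=SMT_FACTOR, overcommit=OVERCOMMIT) -> int:
    """Number of vCPUs a node can offer after the host reservation."""
    return (physical - reserved) * smt * overcommit

def ram_per_vcpu(ram_gb: int) -> float:
    """RAM (in GB) available per sellable vCPU."""
    return ram_gb / sellable_vcpus()

if __name__ == "__main__":
    print(sellable_vcpus())        # 960, i.e., roughly 1,000 vCPUs
    print(ram_per_vcpu(512))       # ~0.53GB per vCPU: at the lower limit
    print(ram_per_vcpu(1024))      # ~1.07GB per vCPU: the safer choice
```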
Of course, customers regularly ask for other breakdowns, but you need to pay attention: If a customer wants systems with a few vCPUs and a large amount of RAM for particularly memory-intensive tasks, the remaining vCPUs of the host on which these VMs run can no longer be sold – no RAM is left for them.
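To see how quickly this happens, consider a hypothetical memory-heavy profile of 4 vCPUs and 64GB of RAM (an assumed size, purely for illustration) landing on the node calculated above:

```python
# Stranded-capacity estimate: how many vCPUs become unsellable once
# high-memory VMs exhaust a node's RAM (assumed flavor size, for illustration).

NODE_VCPUS = 960          # sellable vCPUs from the earlier calculation
NODE_RAM_GB = 1024        # 1TB of RAM

FLAVOR_VCPUS = 4          # hypothetical memory-heavy profile
FLAVOR_RAM_GB = 64

# RAM runs out first: the node fits only this many such VMs.
vms_by_ram = NODE_RAM_GB // FLAVOR_RAM_GB    # 16 VMs
used_vcpus = vms_by_ram * FLAVOR_VCPUS       # 64 vCPUs in use
stranded = NODE_VCPUS - used_vcpus           # 896 vCPUs without RAM

print(f"{vms_by_ram} VMs fill the RAM, leaving {stranded} vCPUs stranded")
```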
This problem, known as waste, can become a real threat to the entire business case. Providers regularly try to discourage such special requests through pricing (i.e., by marketing the memory-heavy hardware profiles at a considerably higher price than those with the standard ratio of vCPUs to RAM).
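One conceivable way to derive such a surcharge – not prescribed by any cloud stack, just a pricing heuristic – is to charge each profile for the largest share of the node it consumes, whether that is CPU or RAM:

```python
# Dominant-resource pricing sketch: price a profile by the largest
# fraction of the node it occupies (assumed values, for illustration).

NODE_VCPUS, NODE_RAM_GB = 960, 1024

def node_share(vcpus: int, ram_gb: int) -> float:
    """Fraction of the node the profile effectively blocks."""
    return max(vcpus / NODE_VCPUS, ram_gb / NODE_RAM_GB)

standard = node_share(4, 4)     # 4 vCPUs, 4GB: CPU is the bottleneck
highmem = node_share(4, 64)     # 4 vCPUs, 64GB: RAM is the bottleneck

print(f"break-even surcharge factor: {highmem / standard:.0f}x")  # 15x
```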
Sensible Detour: A Separate Service Cluster
Almost every virtualization solution requires a number of auxiliary services to run smoothly. OpenStack is a good example, regardless of whether you buy a distribution from Canonical or Red Hat: The Domain Name System (DNS), the Network Time Protocol (NTP), and other services are mandatory.
In my experience, it has proven sensible to run these services on a separate high-availability (HA) cluster, equipped with dual multicore CPUs, at least 256GB of RAM, and several terabytes of disk space, so that it can host multiple VMs for the different services if necessary.
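Whether DNS and NTP actually respond from such a cluster is easy to probe from any node. The following sketch (the server names are placeholders) resolves a hostname through the configured resolver and sends a minimal SNTP request, using nothing but the Python standard library:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix)

def check_dns(name: str = "example.com") -> bool:
    """Resolve a name through the system's configured resolver."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

def ntp_offset(server: str = "pool.ntp.org", timeout: float = 2.0) -> float:
    """Return the rough clock offset to an NTP server, in seconds."""
    # Minimal SNTP request: LI=0, version=3, mode=3 (client), rest zeroed.
    request = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(48)
    # The transmit timestamp (seconds part) sits at bytes 40-43 of the reply.
    ntp_seconds = struct.unpack("!I", reply[40:44])[0]
    return ntp_seconds - NTP_EPOCH_OFFSET - time.time()

if __name__ == "__main__":
    print("DNS ok:", check_dns())
    print("NTP offset: %.3f s" % ntp_offset())
```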
At this point, compliance usually comes into play, because different rules apply to externally reachable services than to purely internal ones. Depending on the target group, the service VMs might then need to be operated twice.