Clustering with the Nutanix Community Edition

The Right Track

Article from ADMIN 67/2022
The free Community Edition of the Nutanix hyperconverged infrastructure, the Nutanix on-premises cloud, is offered alongside the commercial product for those looking to take their first steps in the environment.

To be clear, the Community Edition of Nutanix was developed for testing purposes only; it is not a replacement for the production version, and it does not give you all the possibilities of the commercial version. For example, the Community Edition only supports two hypervisors: Acropolis (AHV) by Nutanix and ESXi by VMware. The basic setup of a private enterprise cloud from Nutanix built on the Community Edition comprises the hypervisor, the Controller Virtual Machine (CVM) with its associated cloud management system, Prism Element for single-cluster management, and Prism Central for higher level multicluster management (Figure 1).

Figure 1: A schematic representation of the components of the management environment.

With the Community Edition, you can set up a one-, three-, or four-node cluster. All other conceivable cluster combinations are reserved exclusively for the commercial version. The individual components of the Community Edition, such as AHV; the AOS cloud operating system, which runs on the individual CVMs in the cluster; and the cloud management system, cannot be mixed with components of the production version. Therefore, you cannot manage a Community Edition cluster with Prism Central from the production version. Conversely, you cannot use Prism Central Community Edition to manage a production cluster.

If you want to use VMware's ESXi as your hypervisor in the Community Edition, also remember that you will not be able to use the Nutanix Flow microsegmentation functionality, because it only works in conjunction with AHV.

Everything's Connected

During the installation and subsequent testing of the Community Edition, it can be quite useful to switch to the command line from time to time. To do this, you need to know how to find your way around the network and which modules and services you can reach. Regardless of whether you are using AHV or ESXi, you always have at least two networks: an internal network that is not connected to a physical network adapter and an external network to which the existing physical adapters are connected. The internal network supports communication between the CVM and the hypervisor; the 192.168.5.0/24 network is used for this purpose.

The hypervisor always has the IP address 192.168.5.1 and the CVM the IP address 192.168.5.2, which means the installation process always creates two virtual bridges or virtual switches for each node in the cluster. If you use AHV, you will find virbr0 and br0 on the node, which for ESXi are vSwitchNutanix and vSwitch0.
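If you want to verify this layout on a running node, a few standard Linux commands are all you need. The following sketch assumes console access to an AHV host and its CVM with the default device names (the Open vSwitch bridge br0 on the host, eth1 as the CVM's internal interface); your output may differ slightly.

# On the AHV host (as root): list the bridges the installer created
ovs-vsctl list-br        # br0 is the Open vSwitch bridge for external traffic
ip addr show virbr0      # internal bridge; the hypervisor answers on 192.168.5.1
ip addr show br0         # external bridge with the address you assigned

# On the CVM (as nutanix): eth1 sits on the internal network
ip addr show eth1        # should carry 192.168.5.2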

You assign external IP addresses to the CVM and the hypervisor during the install. If you now want to access the console of the AHV, you can either address it on the external network or the internal network. The same applies to the console of the CVM: You can access the CVM console from the external or internal network (Figure 2).

Figure 2: Different approaches lead to the CVM and hypervisor consoles.
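As a quick check that both paths work, you can hop between the two consoles over SSH. The internal addresses and accounts are the fixed defaults described above (see also Table 1); the external addresses are whatever you assigned during the install.

# From the CVM console to the local hypervisor over the internal network
ssh root@192.168.5.1        # password: nutanix/4u

# From the hypervisor console back to the local CVM
ssh nutanix@192.168.5.2     # password: nutanix/4u

# From your admin workstation, use the external address instead, for example
ssh nutanix@<external CVM IP>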

Table 1 provides an overview of the accounts you can use to access the system. To access the console on the hypervisor, log in as root; for the CVM console, log in as nutanix. In both cases, the matching password is nutanix/4u.

Table 1

Nutanix Usernames

Component Protocol Username Password
Controller VM SSH nutanix nutanix/4u
AHV SSH root nutanix/4u
ESXi SSH root nutanix/4u
Prism Element HTTPS (port 9440) admin nutanix/4u
Prism Central HTTPS (port 9440) admin nutanix/4u
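Once a cluster is up, you can also verify from any Linux workstation that the Prism web service answers on port 9440 before you even open a browser. The curl call below is just a reachability test; the -k option skips validation of the cluster's self-signed certificate, and the IP address is a placeholder for your environment.

# Does Prism answer on port 9440?
curl -k -I https://<cluster or CVM IP>:9440/

If the service responds, log in from the browser with the admin account listed in Table 1.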

Installation Media

In the first step, you need to create an account with Nutanix [1] and register your email address by following the Get Access Today! link. After you have completed the registration process, you have a personal Nutanix account and are now authorized to log in to the portal [2].

On the Nutanix Community Portal site you will see the Download Nutanix Community Edition block. After clicking on this, the Community Edition download site pops up immediately, and you are treated to an initial overview of the binaries available to you there. At press time, version CE-2020.09.16 was available. Because a new production version was recently released (AOS LTS 5.20 and AOS STS 6.0), it can be assumed that a new Community version will soon follow.

To install the Community Edition (CE), you need to download the corresponding ISO file (CE-202y.mm.dd.iso). You can use this image to install the CVM and AHV on your nodes in a fully automated process. If you would rather use ESXi as the hypervisor in your Nutanix Lab cluster, you also need the image of the vSphere hypervisor (ESXi ISO).
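If you opt for a physical installation like the NUC setup described later, one common approach is to write the installer ISO to a USB stick with dd. The device name /dev/sdX below is a placeholder; check it against the output of lsblk before you press Enter, because dd overwrites the target without asking.

# Write the Community Edition installer image to a USB stick
sudo dd if=CE-202y.mm.dd.iso of=/dev/sdX bs=4M status=progress conv=fsync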

If you want to install and set up Prism Central after your cluster has been installed and configured, you need the matching binary (i.e., the Prism Central Deployment file) in the form of a TAR archive and the Metadata for AOS upgrade and PC deploy/upgrade file as a ZIP file. In addition to the JSON files for upgrades, the latter also contains the ce-pc-deploy-202y.mm.dd-metadata.json file, which you need to install Prism Central.
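To see how these pieces fit together, you can unpack the metadata archive and pretty-print the deployment descriptor before you start the Prism Central rollout. The archive name below is a placeholder for whatever the download page currently calls the ZIP file.

# Unpack the metadata archive and inspect the Prism Central deployment descriptor
unzip <metadata archive>.zip
python3 -m json.tool ce-pc-deploy-202y.mm.dd-metadata.json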

Next, download the VirtIO drivers and, if you want to try out End User Computing (EUC) or virtual desktop infrastructure (VDI) on the Community Edition, the matching plugins. In the Documentation and Guides section on this site you will also find bundles of additional documentation on the Community Edition in the form of PDFs and video files.

Installation Preparation

You have now downloaded all the software you need. The question that remains is how to install the Community Edition: physically or virtually (i.e., in a nested setup)? You also need to decide whether you want a one-, three-, or four-node cluster. No matter what you ultimately decide, the installation procedure is always the same. In the first iteration I take a look at creating a one-node cluster lab based on the Community Edition with Nutanix AHV as the hypervisor, Prism Central for multicluster management, and an Intel NUC (Next Unit of Computing, a small-form-factor barebone computer) mini-PC as the hardware platform.

The NUC used in our lab is the NUC8i7BEH model. It comes with two 32GB DDR4-2666 SO-DIMMs (i.e., a total of 64GB of RAM). The computer has two disk drives – one 512GB and one 1TB SSD – and an eighth-generation quad-core Intel Core i7-8559U processor running at 2.7GHz. The machine is not totally up to date, but it is still perfectly adequate for the lab. If you do not have a machine like this at hand for your installation, use something with similar hardware, or if you are going for a nested setup in your lab, use a VM with similar specs.

To avoid wasting time while installing your lab setup, you should have all the necessary information ready in advance: a DNS server, a default gateway, at least two Network Time Protocol (NTP) servers, and – if you want to connect your lab to Active Directory – access credentials. You also need IP addresses from your lab network: one for the CVM, one for the one-node cluster itself, one in case you want to provide an iSCSI target with Nutanix Volumes, one for the hypervisor, and one for Prism Central. (See the "IP Addresses for Larger Clusters" box.) Additionally, you need unique names for the Nutanix cluster and for Prism Central.

IP Addresses for Larger Clusters

If you are more interested in installing a three- or four-node cluster, remember that you will need separate IP addresses for each individual hypervisor and CVM that resides on your cluster's nodes.
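Before you boot the installer, it is worth running a short pre-flight check against the infrastructure services you just collected. All addresses in the sketch below are placeholders for your own lab network, and the tools used (dig, ntpdate) are assumed to be installed on your workstation.

# Reachability of gateway, DNS, and NTP
ping -c 3 <default gateway>
dig +short nutanix.com @<DNS server>
ntpdate -q <NTP server 1> <NTP server 2>

# Make sure the addresses you plan to assign are still free
for ip in <CVM IP> <hypervisor IP> <cluster IP> <Prism Central IP>; do
  ping -c 1 -W 1 "$ip" >/dev/null && echo "$ip already answers - pick another address"
done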
