Home Lab, NSX

Joining Individual VMware NSX Managers to form a Cluster via CLI

I’ve deployed 3 NSX Managers individually from the NSX OVA onto a single vCenter. By having 3 individual Managers, I have the option to create multiple clusters from each one (probably excessive and incorrect in my case). Instead, my goal is to join all 3 individual managers to form a 3-node cluster and then assign a VIP.

For this process, I will be following VMware documentation that is provided here: Form an NSX Manager Cluster Using the CLI

The 3 NSX Managers I will be referencing and joining are nsxcon1, nsxcon2, and nsxcon3.

Here is an example of the nsxcon1 UI showing the ‘Appliances’ section. You can see there is only a single appliance, and an additional one cannot be added until a ‘Compute Manager’ (such as a vCenter) is added.

I verified CLI connectivity to each of the appliances by running:

get cluster status

This command returns cluster health for the NSX Manager and any appliances that are part of the cluster; in this example, it’s only a single appliance.
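For reference, a rough sketch of what that output can look like on a standalone node is below. The hostname, IP, and IDs are placeholders, and the exact groups and fields will vary by NSX version:

nsxcon1> get cluster status
Cluster Id: 00000000-1111-2222-3333-444444444444
Overall Status: STABLE

Group Type: MANAGER
Group Status: STABLE
Members:
    UUID                                    FQDN                 IP           STATUS
    &lt;node-uuid&gt;                             nsxcon1.lab.local    10.0.0.11    UP

(similar blocks follow for the other service groups, such as DATASTORE, CONTROLLER, POLICY, and HTTPS)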

From the first NSX Manager (nsxcon1), you will want to obtain the API certificate thumbprint by running:

get certificate api thumbprint

That will return the thumbprint of the targeted appliance.
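One more value the join in the next step needs is the cluster ID of the first node. Following the same VMware doc, it can be read from nsxcon1 with the command below; the ID appears near the top of the output as ‘Cluster Id’:

get cluster config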

Moving on to the next node (nsxcon2), which we want to join to nsxcon1, we will use the following command:

mgr-new> join <Manager-IP> cluster-id <cluster-id> username <Manager-username> password <Manager-password> thumbprint <Manager-thumbprint>

Here is an example of what that command looks like when populated and run from the node we want to join to our primary one.
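Since the screenshot doesn’t translate to text, here is a purely hypothetical version of the populated command; the IP address, cluster ID, username, and thumbprint are placeholder values, not the ones from my lab:

nsxcon2> join 10.0.0.11 cluster-id 7e8a1c2b-0000-1111-2222-333344445555 username admin password &lt;admin-password&gt; thumbprint &lt;nsxcon1-api-thumbprint&gt;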

*Please ensure you have taken appropriate backups, as this will take this node and attempt to join it to another cluster. Since this should be a vanilla install, it should not be too much work to re-deploy if needed.

After a couple of minutes, we receive the following prompt.

We can then go back to nsxcon1 and verify with ‘get cluster status’. The cluster status shows ‘DEGRADED’; however, this is normal while the node completes its join process and updates the embedded database.

We can take the ‘join’ command we used earlier on nsxcon2 and run it again on nsxcon3.

After running it and going back to nsxcon1 to check cluster status... we now have 3 nodes appearing.

After a few minutes, our GUI has been fully populated with all NSX Managers reporting as stable

As a cherry on top, we will click on ‘Set Virtual IP’ and assign a dedicated IP address, which also has its own DNS record.

There is our new virtual IP which has been assigned to one of the nodes
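If you would rather skip the GUI for this step, the VIP can also be assigned from the CLI of one of the manager nodes; a minimal sketch with a placeholder address:

set cluster vip 10.0.0.10

The ‘Appliances’ page should reflect the new virtual IP after a few moments.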

Aria, Home Lab

Deploying VMware Aria Suite Lifecycle Manager with Easy Installer

During this greenfield deployment of my home lab, I’m going to be rolling with the latest Aria Easy Installer. Like its predecessor (vRealize Suite Lifecycle Manager), it includes the initial deployment of the Aria LCM appliance, Aria Automation, and VMware Identity Manager.

I’ve performed some prerequisite work such as reserving IPs and creating forward and reverse DNS entries, and I will be deploying this on a 3-node vSAN ESA cluster. The steps below use the Windows UI interface of the installer.
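As a quick sanity check on those DNS prerequisites, forward and reverse lookups can be verified from any workstation before launching the installer; the hostname and address below are placeholders for my lab values:

nslookup arialcm.lab.local
nslookup 192.168.1.50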

Ensure you review the latest Release Notes for Aria Suite Lifecycle 8.12, and if you would like to learn more about Aria Suite, be sure to check out VMware’s site: Aria Platform Lifecycle

Click Install

These are the products that are part of the initial deployment of the Aria Suite

The next screens will be to accept the EULA and join CEIP (optional).

For the ‘Appliance Deployment Target’, you will want to connect to a vCenter Server. If you want to take additional security measures and avoid using the default @vsphere.local accounts, you may create a dedicated account in vCenter and use that for the association. The following document provides the details on the required permissions: VMware Aria Suite Lifecycle: Assign a user role in vCenter

I will be using an AD account I’ve created, and because my LDAP is the default Identity Source, I only have to enter the user account and not append the domain.

On the next screen select ‘Compute Resource’ and click Next.

For our install, we will be placing it on our vSAN datastore.

Next screen will be ‘Network Configuration’

For the ‘Password’ configuration, ensure you document everything this password is used for. This is critical for future troubleshooting, lifecycle operations, and any kind of password rotation.

Populate information regarding initial appliance deployment for Aria LCM

The next step is the Identity Manager configuration. There is an option to import a version deployed outside of Easy Installer, and there are additional options below regarding syncing Active Directory.

The final configuration is for Aria Automation Part 1

Aria Automation Part 2

The final part will be to kick off and monitor the installation. You should notice your vCenter begin deploying VMs.

The status now shows it completed successfully. I had 3 VMs deployed (your results may vary if you configured clustered options for your appliances).

Once completed, you can verify accessibility to all the appliances. Below is the splash page when logging on to Aria LCM.

Home Lab

VMware – Error when using ‘Erase Partition’ on (vSAN) Storage Device – Failed to update disk partitions

I’ve been going through upgrades in the home lab, and one of the changes has been preparing to tear down an existing vSAN cluster and create a new one. While going through the vSAN ESA configuration, disks were not showing as available, so I needed to go in and delete the existing partitions from the old vSAN cluster.

When selecting a host in vSphere and attempting to ‘Erase Partition’ on a storage device, we encounter the following error; below that is the error in Tasks.

SSH into the host and run the following command to verify disks are still part of a vSAN Disk Group

esxcli vsan storage list

In my case, all 3 disks appeared. The following command removes the disks from the group; first, you will need to obtain the VSAN Disk Group UUID from the output above.

esxcli vsan storage remove -u <VSAN Disk Group UUID>

After running the command, you will be returned to the CLI prompt, and you can confirm the disk group is empty by re-running the first command, ‘esxcli vsan storage list’.
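Putting it all together, the cleanup sequence looks roughly like this; the UUID is a placeholder, so substitute the ‘VSAN Disk Group UUID’ value from your own ‘list’ output:

esxcli vsan storage list
esxcli vsan storage remove -u 52abcdef-1234-5678-9abc-def012345678
esxcli vsan storage list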

Go back into Storage Devices and retry the Erase Partition. I live dangerously, so I did all 3 at once 🙂

It completed, and I validated in Tasks that the partitions were updated successfully.

Let the vSAN configurations continue.

Home Lab

[Video] Upgrading Supermicro E300 to VMware vSphere 8 w/ vSAN Cluster

Alright... I dove right in and decided to get my management cluster upgraded to vSphere 8 after upgrading prerequisites such as vROps, Log Insight, vRealize Lifecycle Manager, and NSX.

The NVMe drives in my hosts are not on the VMware vSAN HCL, but thankfully I was able to ignore that in the Remediate settings options. I do have TPM 2.0 chips with Host Encryption enabled; so far, no errors.

Next steps will be to explore vSAN ESA..

Home Lab, Uncategorized

VMware vCenter 8 Upgrade Step-by-Step – Part 1 – vCenter Upgrade

The first step will be to take a snapshot of the vCenter. If you are running Enhanced Linked Mode, ensure you power all vCenters off and take cold snapshots from the Host UI.
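If you would rather take that cold snapshot from the ESXi command line than the Host UI, here is a hedged sketch using vim-cmd; the VM ID comes from the first command, and the snapshot name and description are just examples:

vim-cmd vmsvc/getallvms | grep -i vcenter
vim-cmd vmsvc/snapshot.create &lt;vmid&gt; pre-upgrade "Cold snapshot before vCenter 8 upgrade" 0 0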

Because the upgrade deploys a new vCenter appliance, we will be renaming our existing VM object from ‘vCenter’ to ‘vCenter_old’

Accessing the good old-fashioned UI installer wizard, we will be selecting ‘Upgrade’.

This will involve a series of steps. Step 1 is ‘Deploy a vCenter’; this step begins the deployment of a new VCSA (vCenter Server Appliance).

After accepting the EULA, the next step will be to ‘Connect to Source Appliance’; this would be the hostname of the VM (not the VM object name in vCenter).

I will then put in the landing vCenter I want to deploy the new appliance to.

For Step 5, you will select a folder location for the VM, followed by Step 6, which is to select a Compute Resource.

Step 7 will ask for the name of the new VM appliance and desired root password

For Step 8, you will select the deployment size of the new appliance. This will vary in every environment; always plan for anticipated future growth.

The next step will be to select a datastore. I will select my storage location and enable ‘Thin Provisioning’.

The next step is to select the port group assigned to the desired network and a temporary IP address for the VCSA; the IP is only temporary because, at the end of the upgrade, the new appliance takes over the original network settings.

This is the final configuration for this part of the upgrade; it will be followed by a confirmation and then waiting for the installation to complete.

Once the installation is completed, you should receive the following confirmation. Notice that you have a temporary VAMI interface to the new vCenter in the event you have to do any troubleshooting. The installer should then continue.

The beginning of the wizard should only prompt one option, and that is the 2nd step. Click Continue.

These are the warnings that appeared in my environment; they still allow me to proceed.

The next step is to select what information you want copied over. I personally chose both options, since my environment is small. Click Next.

The final steps will be the option to join CEIP, followed by confirming you performed a backup, and then kicking off the process.

During the process, you will lose connectivity to your vCenter; you can always find the host the vCenter resides on and monitor from the console.

And just like that…we upgraded to 8 successfully.

For future blogs I will try and dive into vSphere 8 features more in depth.