I’ve been going through upgrades in the homelab, and one of the changes was preparing to decommission an existing vSAN cluster and create a new one. While going through the vSAN ESA configuration, the disks were not showing as available, so I needed to go in and delete the existing partitions left over from the old vSAN cluster.
When selecting a storage device on a host in vSphere and attempting ‘Erase Partition’, we encounter the following error; below that is the same error as it appears in Tasks.
SSH into the host and run the following command to verify whether the disks are still part of a vSAN disk group:
esxcli vsan storage list
In my case, all 3 disks appeared. The following command removes the disks from the group, but first you will need to obtain the vSAN Disk Group UUID:
esxcli vsan storage remove -u <VSAN Disk Group UUID>
After running the command, it returns you to the CLI prompt. You can confirm the disk group is empty by re-running the first command, ‘esxcli vsan storage list’.
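Putting the steps above together, a minimal SSH session looks like this (the disk group UUID is a placeholder; substitute the value from your own list output):

```shell
# List vSAN storage on this host; note the "VSAN Disk Group UUID" field
esxcli vsan storage list

# Remove the entire disk group by UUID -- this destroys any vSAN data
# remaining on those disks, so make sure the old cluster is truly retired
esxcli vsan storage remove -u <VSAN Disk Group UUID>

# Re-run the list; empty output confirms the disks have been released
esxcli vsan storage list
```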
Go back into Storage Devices and retry the Erase Partition. I live dangerously, so I did all 3 at once 🙂
It completed, and I validated in Tasks that the partitions were updated successfully.
Alright… dove right in and decided to get my management cluster upgraded to vSphere 8 after upgrading prerequisites such as vROps, Log Insight, vRealize Lifecycle Manager & NSX.
My NVMe drives in the hosts are not on the VMware vSAN HCL, but thankfully I was able to ignore that in the Remediate settings options. I do have TPM 2.0 chips with Host Encryption enabled; so far no errors.
The first step is to take a snapshot of the vCenter. If you are running Enhanced Linked Mode, ensure you power off all vCenters and take cold snapshots from the Host UI.
Because the upgrade deploys a new vCenter appliance, we will rename our existing VM object from ‘vCenter’ to ‘vCenter_old’.
Accessing the good old-fashioned UI installer wizard, I will be selecting ‘Upgrade’.
This will be a series of steps. Step 1 is ‘Deploy a vCenter’, which begins the deployment of a new VCSA (vCenter Server Appliance).
After accepting the EULA, the next step is to ‘Connect to Source Appliance’; this is the hostname of the VM (not the VM object name in vCenter).
I will then put in the landing vCenter I want to deploy the new appliance to.
For Step 5 you will select a folder location for the VM, followed by Step 6, where you select a compute resource.
Step 7 will ask for the name of the new VM appliance and the desired root password.
For Step 8 you will select the deployment size of the new appliance. This will vary in every environment; always plan for anticipated future growth.
The next step is to select a datastore. I will select my storage location and enable ‘Thin Provisioning’.
The next step is to select the portgroup assigned to the desired network and a temporary IP address for the VCSA; at the end of the upgrade, all the network settings carry over to the new appliance.
This is the final configuration for this part of the upgrade. It is followed by a confirmation screen and then waiting for the installation to complete.
Once the installation is completed, you should receive the following confirmation. Notice that you have a temporary VAMI interface to the new vCenter in the event you have to do any troubleshooting. From here, the installer should continue.
The beginning of the wizard should only prompt one option, which is the 2nd step. Click Continue.
These are the warnings that appeared in my environment; they still allow me to proceed.
The next step is to select what information you want copied over. I personally chose both, since my environment is smaller. Click Next.
For the final steps, you will have the option to join CEIP, followed by confirming you performed a backup, and then kicking off the process.
During the process you will lose connectivity to your vCenter. You can always find the host the vCenter is residing on and monitor from the console.
And just like that… we upgraded to vSphere 8 successfully.
For future blogs I will try and dive into vSphere 8 features more in depth.
In preparation for vSphere 8 upgrades, I’m in the process of upgrading many of the solutions in the homelab before upgrading to the big 8.
I’m currently running NSX-T 3.2.1 with NSX Manager appliances. I have NSX deployed out to a cluster with a couple of Edge appliances in a cluster configuration.
For those that might’ve missed the word, it was announced in early 2022 that the NSX-T 3.x naming would be retired and shift to simply NSX, with versions 4.x going forward. You can read more about this here.
The first step was to ensure I had a recent ‘Successful’ backup from within the NSX-T Manager itself.
When you go out to Customer Connect Downloads, you will want to download the NSX 4.x *.mub upgrade file.
Once the file is downloaded, I chose to upload it from my local system where I was using my browser to access the NSX interface.
Once the file uploads, the next step is to click ‘Prepare for Upgrade’.
The following process will take some time; you might even be prompted with a session timeout. In my instance I hit the error “Repository synchronization operation was interrupted. Please click on resolve to retry. Repository synchronization failed.” A ‘Retry’ ran and completed the check successfully.
Once this process completed, it took me to step 2 and the manager console reloaded.
Click on the drop-down and select ‘All Pre-Checks’.
After reviewing the pre-check results, I went through the alarms and felt comfortable moving forward with the upgrade. The Edges were alarming due to memory consumption, and the Manager alarms related to the NSX ‘audit’ account.
I selected to run the upgrade in ‘Serial’ and selected ‘After each group completes’ for the pause upgrade condition. Click ‘Start’.
The Edge upgrades completed successfully; I will click Next for the Hosts. There is also an option to ‘Run Post Checks’.
The post checks ran fine, and the next step is to start the host upgrades.
The host upgrades completed successfully, and the post-upgrade check succeeded as well. The only gotcha for my cluster was that I had to manually move some VMs onto other hosts and power some VMs down to conserve resources; once that was done, the hosts entered maintenance mode.
The final step is to upgrade the NSX Managers; click ‘Start’.
So the upgrade failed immediately as the ‘audit’ account came back to bite me. There was a strange behaviour where, even after I updated the password, the account still showed a Password Expired status; I ‘Deactivated’ and ‘Activated’ the account, and it then showed Active. The message also stated to take action on the ‘Alarm’ in NSX, so I went back, Acknowledged and Resolved the alarms, and did not leave any in Open status.
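For reference, password expiry on the local NSX accounts can also be inspected and adjusted from the NSX Manager admin CLI. These are the standard NSX-T/NSX user-management commands, but verify them against the CLI reference for your release; I resolved the Deactivated/Active state through the UI as described above.

```shell
# On the NSX Manager admin CLI (SSH in as the admin user)

# Show the password-expiration setting for the audit account
get user audit password-expiration

# Set a new password for the audit account (prompts interactively)
set user audit password

# Optionally disable password expiration for the account entirely
clear user audit password-expiration
```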
The upgrade will allow you to continue once you navigate back to System >> Upgrade. Go back to Step 1 and run the ‘Pre Check’ for NSX Manager only before proceeding to the final upgrade step.
The upgrade completed successfully. You will notice the banner in the top-left corner now reads simply ‘NSX’.
Whether you are running VMware Cloud Foundation (VCF) or a standalone vRealize Lifecycle Manager instance managing your vRealize products, the following management pack for vRealize Operations will help give you a health-monitoring dashboard for your solutions. You can monitor capacity growth over time as well as certificates.
You can obtain the management pack a couple of different ways. One way is through the VMware Marketplace: download the pack and upload it to the vRSLCM appliance via SCP.
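As a sketch, the SCP upload looks like this; the pack filename, destination path, and appliance hostname below are placeholders from my lab, not fixed values:

```shell
# Copy the downloaded management pack (.pak) to the vRSLCM appliance
# (hostname, filename, and path are examples -- adjust for your environment)
scp vmware-mp-sddc-health.pak root@vrslcm.lab.local:/tmp/
```

From there, the pack can be picked up on the appliance and imported through the vRSLCM interface.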