Home Lab, VCF, Video

Changing the Datastore Type of a Commissioned ESXi host in VCF [Video]

Did you select the incorrect datastore type when commissioning an ESXi host? I ran into this while trying to deploy a second cluster in my Management Domain and thought, “Oh no!” I had previously imported hosts using JSON [Blog] and selected ‘VSAN’ instead of ‘VSAN_ESA’.

The following short video on my channel walks through the Decommission and Commission process for an Unassigned host.
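For reference, the storage type is set in the commission JSON itself. The fragment below is a hypothetical, trimmed example of a hosts spec (field names per the SDDC Manager commission JSON template; the FQDN, password, and network pool name are placeholders), showing where ‘VSAN’ versus ‘VSAN_ESA’ gets chosen:

```json
{
  "hostsSpec": [
    {
      "hostfqdn": "esxi-4.lab.local",
      "username": "root",
      "password": "<password>",
      "storageType": "VSAN_ESA",
      "networkPoolName": "mgmt-np"
    }
  ]
}
```

If you commission with the wrong value, the decommission/recommission process in the video is the way to correct it.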

Cloud Management, Home Lab, Networking, NSX

BGP Peering NSX Tier-0 with Ubiquiti UDM Pro

After several months of attempts and VCF rebuilds, BGP peering between VMware Cloud Foundation, NSX, and my Ubiquiti UniFi network finally happened, and I wanted to share my experience. It required a lot of help from internal rock star TAMs in our organization, external blogs, and Ubiquiti support. I am by no means an expert in networking, especially when it comes to routing protocols.

I am running a UDM Pro on firmware 4.1.13 with Network app version 9.0.114. A recent UDM release added BGP functionality via the GUI. I found others in the community who got BGP working on earlier versions of the UDM Pro; check out Chris Dook’s blog.

Not to get too deep into the VMware stack: I deployed an NSX Edge Cluster consisting of 2 nodes from SDDC Manager. When viewing the Networking Topology in NSX, the IPs you can see are the Edge interfaces, which will need to be defined in your configuration file.

My order of deployment was off a little because I was troubleshooting; I was in the middle of my Edge Cluster deployment, but if I could do it again, I would make sure the configuration on the Ubiquiti side is done first.

During the Edge Cluster deployment you can go into NSX and do a few things, but the deployment might get stuck on a failed task if BGP is not working; luckily, I was able to Restart Task and complete it. I must say the SDDC Manager appliance was resilient, considering I had to reboot my UDM Pro several times over a span of 2 days.

I configured 2 BGP neighbors, one for each network containing the respective Edge interface subnets; you can see I have one neighbor each for the .3 and .4 subnets.

I saved the following configuration in a text file named <name>.conf. Initially I used the default name frr.conf, but the UDM appliance already has that file, so support suggested not reusing the name.

!
router bgp 65000
 bgp router-id 192.168.100.254
 neighbor 192.168.3.2 remote-as 65001
 neighbor 192.168.3.3 remote-as 65001
 neighbor 192.168.4.2 remote-as 65001
 neighbor 192.168.4.3 remote-as 65001
 !
 address-family ipv4 unicast
  redistribute connected
  redistribute static
  redistribute kernel
  neighbor 192.168.3.2 activate
  neighbor 192.168.3.3 activate
  neighbor 192.168.4.2 activate
  neighbor 192.168.4.3 activate
  neighbor 192.168.3.2 soft-reconfiguration inbound
  neighbor 192.168.3.3 soft-reconfiguration inbound
  neighbor 192.168.4.2 soft-reconfiguration inbound
  neighbor 192.168.4.3 soft-reconfiguration inbound
 exit-address-family
!

From the UniFi Network UI, go to Settings >> Routing >> BGP, create the entry there, and upload the configuration file. Once that is completed, connect to the UDM CLI.

The following commands will need to be added and saved to the running config.

root@UDMPro:~# vtysh

Hello, this is FRRouting (version 8.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

frr# configure terminal
frr(config)# ip prefix-list ALL-ROUTES seq 5 permit 0.0.0.0/0 le 32
frr(config)# route-map EXPORT-ALL permit 10
frr(config-route-map)# match ip address prefix-list ALL-ROUTES
frr(config-route-map)# exit
frr(config)# router bgp 65000
frr(config-router)# address-family ipv4 unicast
frr(config-router-af)# neighbor 192.168.3.2 route-map EXPORT-ALL out
frr(config-router-af)# neighbor 192.168.3.3 route-map EXPORT-ALL out
frr(config-router-af)# neighbor 192.168.4.2 route-map EXPORT-ALL out
frr(config-router-af)# neighbor 192.168.4.3 route-map EXPORT-ALL out
frr(config-router-af)# exit
frr(config-router)# exit
frr(config)# exit
frr# write memory

For the instructions above, make sure you type ‘exit’ until you are back at the frr# prompt before running ‘write memory’.

So, what happens next? Let’s verify our routes are being advertised to and learned from NSX. There are other ways to validate; however, this is what I was chasing down, and it ultimately resolved my Edge Cluster deployment.

From one of the NSX Edges (in admin mode), you can access the Tier-0 and run ‘get route bgp’ to see which paths were learned via BGP.
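You can also check the session from the UDM side with `vtysh -c "show ip bgp summary"`. The sketch below is a local shell illustration (the summary text is hypothetical sample output) of what to look for: once a session is Established, the State/PfxRcd column shows a numeric prefix count instead of a state name like Idle or Active.

```shell
# Hypothetical excerpt of `vtysh -c "show ip bgp summary"` output on the UDM Pro
summary='Neighbor        V    AS  MsgRcvd  MsgSent  Up/Down  State/PfxRcd
192.168.3.2     4  65001      120      118 01:02:03            5
192.168.3.3     4  65001      119      117 01:02:01            5'

# Count peers whose last column is numeric, i.e. Established sessions
# exchanging prefixes; "Idle" or "Active" there means the session never came up.
established=$(printf '%s\n' "$summary" | awk 'NR>1 && $NF ~ /^[0-9]+$/ {n++} END {print n+0}')
echo "established peers: $established"
```

With four neighbors defined, you would want all four rows to show a prefix count before trusting the peering.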

This is all in my personal lab; please do not rely on these steps for a production environment. Use caution and consult with a partner or professional services.

Home Lab, VCF

Upgrading a VMware Cloud Foundation 5.2.1 Management Domain

In my previous blog, I documented upgrading SDDC Manager to the latest available version. Now that that is complete, I will be moving on to the Management Domain.

I also recorded the steps on video if anyone wants to watch or follow along in their labs.

From the SDDC Manager UI left-hand navigation pane, select ‘Workload Domains’, select the domain to browse, and go to ‘Updates’

Pre Checks & Downloading

You will find that I previously ran some prechecks and tried addressing the findings; many of them were VCSA file-level backup warnings in the VAMI and vSAN incompatibility issues (expected in my case with my home lab hardware).

Scroll a little further and check out the versioning for the existing components.

When I go back up to the prechecks, I have the option to Silence Alerts so that I can resume upgrades.

I will silence the first one, regarding the disk group mode, selecting the ‘All Clusters’ option. The errors do continue to appear, though, so I am not entirely sure it is really silencing them.

Going back to our Updates section under ‘Available Updates’, you should see ‘Plan Patching’

Here we can select our targeted versions and validate, because we upgraded SDDC Manager independently.

Once validated, you will find the update sequence.

After clicking ‘Done’, we now have the option to Schedule or Download Now

Starting Step 1 – NSX Upgrades

Once the downloads complete successfully, a ‘Configure Update’ option should appear; once clicked, this brings up a wizard-like screen. Because I have no NSX Edges, this part is skipped and we jump to step 3.

I will select to upgrade the hosts sequentially, click Next and then click ‘Run Precheck’

If the Precheck succeeds, proceed with scheduling

The first step will be Review, and then you can pick ‘Upgrade Now’ or ‘Schedule Update’

In our Update section, we now have an update in progress

We can click on View Status and get more detailed steps of the NSX upgrade

You can also monitor the Upgrade Sequence from the Domain ‘Update’ section

While waiting, if you’re curious what is happening from the NSX console, you can see it tells you an upgrade is in progress and that it is managed through SDDC Manager.

The upgrade completed successfully.

Starting Step 2 – vCenter Bundle Download & Upgrade

Our Available Updates will now move on to vCenter; we can click Download.

Once the download completes, click ‘Configure Update’

The upgrade wizard for the vCenter upgrade appears. There is a mention of upgrading vCenter with ‘Reduced Downtime Upgrade’, with a link to learn more about it; essentially, it will deploy a new appliance, and you can plan a maintenance window to perform the remainder of the migration.

Confirm your backup options; as part of the vCenter upgrade, a temporary IP address is required so that the new vCenter can be stood up in parallel (pre data migration).

There are options to schedule the preparation and switchover, but I’m running everything immediately. The next screen is to review and confirm. Click Finish.

We now have an upgrade in progress.

If we hop over to our vCenter, there is the newly deployed appliance

After a while, the upgrade succeeded

Step 3 – vSphere 8.0 ESXi Host Upgrades

Before going too deep into lifecycling the hosts, there is some prerequisite work: setting up and configuring a targeted vLCM image on the cluster.

Now onto step 3, in our same Update section, we will kick off the Download

If we stop and check our overall upgrade, you can see the final step will be the ESXi upgrade, but there are a few things we need to do first.

Step 3a – Prerequisite before kicking off Step 3 – Configuring vLCM Cluster Image

From the vSphere client, select the cluster we want to upgrade, click ‘Updates’ and then ‘Edit’ the existing image

We’re going to select the latest version available

Click ‘Save’ and some compatibility checks will kick off. Go over to the SDDC Manager UI, access Lifecycle Management >> Image Management, and select ‘Import Image’.

Select the Workload Domain and cluster underneath it. Continue with naming the Image and then extract the Image.

Once the task is complete, we go over to the domain we’re updating and back into ‘Updates’ to resume configuring our update. It may be at ‘Configure’ or ‘Reconfigure Update’

Select the cluster; the next step would be to ‘Assign Image’. Look at that: we do not have an image. Let’s cancel out and go get an image created.

From the drop-down menu, select the image you extracted and click ‘Assign Image’ – Click Next

Customize selection depending on your environment, I’m choosing the following

The final step should be to review settings and click ‘Run Precheck’

Once the precheck completes, you will be presented with a ‘Schedule Now’ option.

Run through the schedule wizard to select immediately or for later, and monitor the upgrade.

Home Lab, VCF

Guide to Upgrading ESXi Hosts Using CLI

Following recent articles I published about commissioning ESXi hosts with VMware Cloud Foundation (see here), I prepared 3 of my physical servers with an ESXi image, not managed by vCenter. I wanted to commission these hosts to SDDC Manager 5.2.1.1; however, the version from my initial install was no longer compatible, and during validation I came across the following error.

This led me to upgrade the hosts via the CLI and to write this up as extra help for others out there.

With that, I needed to either re-image the hosts entirely or perform an in-place upgrade, and that is where the traditional esxcli command came to the rescue.

Please see Broadcom Article ID: 380215 regarding changes to esxcli syntax with upgrading hosts from host CLI

The SATADOM on my SuperMicro E300 servers is only 64GB, so I needed additional space to stage the binary. I decided to use iSCSI storage: I mounted a datastore to the standalone host and uploaded the depot zip to it, and it would be available to all my hosts once an iSCSI software adapter was added.

This upgrade requires a reboot. To learn more about vSphere 8.0 ESXCLI, please refer to the Broadcom documentation here. The next step is to SSH into the ESXi host and, from the CLI, place the host in Maintenance Mode so that we can safely reboot after the image is updated.

Placing the host in Maintenance Mode

esxcli system maintenanceMode set --enable true

Verify the host is in Maintenance Mode; it should report back ‘Enabled’ (and you can check in the Host UI).

esxcli system maintenanceMode get

The next command lists the image profiles contained in the uploaded *.zip depot file. See the example below.

esxcli software sources profile list --depot=<depot_URL>

Here are our image profiles; you will want to copy either the no-tools or the standard profile name.
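If you would rather not copy the profile name by hand, you can filter the listing for it. A minimal sketch, using the profile names from my depot (adjust the pattern if you want the no-tools image):

```shell
# Two of the profile names reported by `esxcli software sources profile list`
profiles='ESXi-8.0U3c-24414501-standard
ESXi-8.0U3c-24414501-no-tools'

# Keep the standard profile; the result is what you pass to --profile= next
profile=$(printf '%s\n' "$profiles" | grep -- '-standard$')
echo "$profile"
```

The `--` in the grep call is needed because the pattern itself starts with a hyphen.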

The next command performs the actual update of the image. I had to add ‘--no-hardware-warning’ because an error was thrown relating to hardware support; however, another host went through without it, so you can try with or without the flag.

esxcli software profile update --depot=/vmfs/volumes/67a7e705-b1512891-3923-5847ca7a9948/esxi803c/VMware-ESXi-8.0U3c-24414501-depot.zip --profile=ESXi-8.0U3c-24414501-standard --no-hardware-warning

The result shows which VIBs were installed, removed, and skipped.

You can see ‘Reboot Required’; from the CLI, just type ‘reboot’ and the host will reboot.

Once the host is back up, check the image profile from the UI.
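You can also confirm from the CLI: `esxcli system version get` reports the running build. A minimal local sketch (the output text below is a hypothetical sample) of checking that the build now matches the depot we applied:

```shell
# Hypothetical sample of `esxcli system version get` output after the reboot
version_out='Product: VMware ESXi
Version: 8.0.3
Build: Releasebuild-24414501'

# The build number should match the depot we installed (24414501)
printf '%s\n' "$version_out" | grep -q '24414501' && result="upgrade applied"
echo "$result"
```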

To validate the final reason I went through all this, the host can now successfully commission in SDDC Manager.

Happy Homelabbing!

Home Lab, VCF

Upgrading VMware Cloud Foundation SDDC Manager (Independent) from 5.2.1.0 to 5.2.1.1

The following article is a simple step-by-step walkthrough of upgrading VMware Cloud Foundation, starting with SDDC Manager and then a Management Domain. You can read more on Broadcom TechDocs about the independent SDDC Manager upgrade using the SDDC Manager UI.

Please note that this is being performed in a personal home lab. Always take precautions and contact Broadcom support or any services through a partner for guidance.

From SDDC Manager, go over to ‘Lifecycle Management’ in the left-hand navigation pane and select ‘SDDC Manager’; you should find a package available to download. Click Download to begin.

Once the download completes, you will find two options; we will run the Precheck first.

A Precheck complete confirmation will later appear where the download status was shown, along with an option to click View Details.

The precheck clears; this is a clean instance of SDDC Manager as it was recently deployed.

Scroll a little further down and look at the various tests

When you go back to the SDDC Manager initial screen, you can kick off the update by clicking ‘Update Now’

Please follow the documentation. Going into this, I purposely did not snapshot the SDDC Manager because I wanted to test what it was going to do.

Click ‘Start Upgrade’

Status updates

The primary UI will switch over in a moment; however, you can go to Tasks and check the status.

The screen should cut over to the following to show the upgrade progress.

Monitor the various updates; it’s good to know this order to help with any troubleshooting if needed.

Once completed, the option to click ‘Finish’ will be there and then the SDDC Manager UI screen should load.

The next set of updates should be the Management Domain and its respective components.