Home Lab, Networking, VCF

Configuring Centralized Connectivity Networking with BGP in VCF 9.0

VCF 9.0 introduced a new way to set up and configure networking. There are two models that can be configured depending on your business's needs: Centralized Connectivity and Distributed Connectivity. I won't go into full detail on the differences between the two; please read about them here: Setting up Network Connectivity.

As for the plumbing of the network, it's all a Ubiquiti UniFi system, and the ESX hosts each have dual 10Gb interfaces. My choice of routing is BGP; read more about my experience here: BGP Peering NSX Tier-0 with Ubiquiti UDM Pro. This will be key, as I want my virtual networking to traverse the physical network (North/South).

My blog on BGP peering is pending an update. I will work on that, but it should still get the peering working for you.

Deployment

From the vSphere Client, highlight the vCenter instance and, in the right pane, navigate to Networks >> Network Connectivity.

You will be presented with a Gateway Type; ours will be Centralized Connectivity.

Networking Prerequisites

This is where we configure our Edges, select a size, and give the Edge cluster object a unique name.

Clicking ‘Add’ will bring up the configuration for the first Edge node. This section is mainly management configuration for the Edge, as well as the Edge TEPs.

The bottom was cut off; here is the remainder.

Once we save, we have the option of Adding or Cloning the configuration.

Fill out the unique settings for the clone

There is an option to have passwords auto-generate, or you can specify the ‘admin’ and ‘root’ passwords.

Example of filling it out

The final step before deploying is ‘Workload Domain Connectivity’, which contains the BGP-related configuration and the Edge gateway uplink configuration.

The next series of screenshots shows the configuration of two gateway uplinks for each Edge appliance.

I have two physical networks created (VLAN 3 & VLAN 4) as interfaces specifically for overlay and North-South communication.

The final step is Deploy. Take a look at some of the photos post-install.
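If you want to sanity-check the BGP sessions from the CLI rather than just the screenshots, you can console or SSH into one of the newly deployed Edge nodes and look at the Tier-0 service router. This is only a rough sketch; the VRF number will almost certainly differ in your environment.

get logical-routers
vrf 1
get bgp neighbor summary

The first command lists the logical routers and their VRF IDs, the second drops you into the Tier-0 SR context, and the third should show the UDM Pro peers in an established state if the peering came up.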

After deployment, perform an Inventory Sync in VCF Operations. This lets Operations manage monitoring and the password lifecycle.

Under Fleet Management >> Passwords, the Edge appliances are not there yet.

From VCF Operations >> Inventory, select the Management Domain and perform a Sync Inventory

After some time, the additional accounts will appear under Fleet Management >> Passwords.

VCF

Upgrading a vCenter 8.x Workload Domain to 9.0.1.0 in VCF Operations – Part 2

In a previous blog I covered importing a vCenter instance with a single cluster (see here), and in Part 1 of this blog I covered performing a configuration update needed for the newly imported Workload Domain.

The next step is to go back into Lifecycle for the Workload Domain and click Configure Update.

The wizard for the NSX precheck will begin; click Next. In my home lab I have no Edge clusters configured, so I clicked Next and went straight to Run Precheck.

Once the precheck completes, we can move forward with Schedule Update, which gives you the option to run the update right away or schedule it for a later time.

Once the update has kicked off, you can click ‘View Status’ and expand the view to see all of the components.

Hey! Look at that, Step 1 is complete… on to Step 2. Click ‘Configure Update’.

This next step will be upgrading the vCenter Server appliance from 8.0 U3 to 9.0. Because this is a single vCenter, we can leverage the Reduced Downtime feature; in this case, we will go straight into it.

Confirm Backup

Provide a temporary IP address for the new vCenter (no DNS record is required, though you can optionally reserve one for future use).

Set the update and switchover schedule options; we're going for Immediate. The final step is to review everything and then proceed.

vCenter upgrade completed!
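If you want to double-check the new version outside of the UI, the vCenter appliance REST API can report it. A minimal sketch with curl; the FQDN is a placeholder, so substitute your own, along with an account that can log in:

curl -k -u 'administrator@vsphere.local' -X POST https://<vcenter-fqdn>/api/session

That call returns a session token, which you then pass to the version endpoint:

curl -k -H "vmware-api-session-id: <token>" https://<vcenter-fqdn>/api/appliance/system/version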

Home Lab, NSX

Enable SSH Service on NSX Controllers Using API w/ Postman

In my home lab, I look for little tasks and find ways to repeat them quicker, easier, and perhaps even more securely. Everything I share can be performed in many different ways; what's important to me is finding a new way every time.

As a security measure, I chose to leave SSH disabled when deploying my NSX Controllers, and now I need to access my managers so that I can run some commands. Rather than typing a long, complicated password into a VMware console, I wanted to do this via the API using Postman. (This also allows me to dig in and learn more about Postman.)

API reference documentation is available on the VMware by Broadcom Developer site; simply bring up the site below, search for ‘SSH’, and you will find the SSH-related API calls.

NSX-T Data Center REST API – VMware API Explorer

The following call will get the status of SSH on an individual NSX manager.

GET https://<nsx-mgr>/api/v1/node/services/ssh/status
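If you would rather work from a terminal instead of Postman, the same call works with curl and basic authentication. The manager address is the same placeholder as above, and -k is only there because my lab uses self-signed certificates:

curl -k -u admin "https://<nsx-mgr>/api/v1/node/services/ssh/status"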

If you want to review the properties of the SSH configuration, run the following

GET https://<nsx-mgr>/api/v1/node/services/ssh

The final step is to enable SSH on the controller by running

POST https://<nsx-mgr>/api/v1/node/services/ssh?action=start
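The curl equivalent would look something like this (again, placeholder manager address):

curl -k -u admin -X POST "https://<nsx-mgr>/api/v1/node/services/ssh?action=start"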

and we are in

Referencing the API documentation listed at the beginning of the article, the commands are largely the same; they just take a parameter of ‘stop’, ‘start’, or ‘restart’.
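For reference, the stop and restart variants look like this:

POST https://<nsx-mgr>/api/v1/node/services/ssh?action=stop

POST https://<nsx-mgr>/api/v1/node/services/ssh?action=restart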

Home Lab, NSX

Fix NSX 4.1.0 ‘Install Skipped’ During Host Preparation in a vLCM Cluster

With VMware vSphere 8.x out and vSphere Lifecycle Manager shifting from individual baselines to cluster images, there are some additional issues you may encounter when integrating with other solutions from VMware or even other vendors.

I recently encountered an error in NSX 4.1.0.2.0.21761693; during host preparation I received the following error.

When clicking on the error for details and steps, you see

Go to the VMware Cluster >> Updates >> Image

You can perform an image compliance check manually; here you will find my problematic host not showing as compliant because it is missing the NSX VIBs.
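If you want to confirm from the host side what is missing, you can SSH to the ESXi host and list the installed VIBs; on a host that was never prepared, the NSX VIBs simply won't show up. A quick check, assuming SSH access to the host:

esxcli software vib list | grep -i nsx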

Click ‘Remediate All’, review your remediation settings, and click ‘Remediate’. Once remediation completed, I decided to reboot the host. Once it came back up, I located the node inside NSX Manager, clicked ‘View Details’ on the far right, and clicked ‘Resolve’ at the prompt.

Monitor the installation status

This completed successfully, as the host now shows as prepared with a status of ‘Success’.
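If you prefer to confirm via the API instead of the UI, the transport node state endpoint should report the same result. This is a sketch with a placeholder manager address and node UUID (you can look up the UUID from GET /api/v1/transport-nodes):

curl -k -u admin "https://<nsx-mgr>/api/v1/transport-nodes/<node-uuid>/state"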

Home Lab, NSX

Joining Individual VMware NSX Managers to form a Cluster via CLI

I’ve deployed 3 NSX Managers individually from the NSX OVA onto a single vCenter. By having 3 individual Managers, I have the option to create multiple clusters from each one (probably excessive and incorrect in my case). Instead my goal is to join all 3 individual managers to form a 3-node cluster and then assign a VIP.

For this process, I will be following VMware documentation that is provided here: Form an NSX Manager Cluster Using the CLI

The 3 NSX Managers I will be referencing and joining are nsxcon1, nsxcon2 & nsxcon3.

Here is an example of the nsxcon1 UI showing the ‘Appliances’ section; you can see there is only a single appliance, and an additional one cannot be added until a ‘Compute Manager’ (such as a vCenter) is registered.

I did verify CLI connectivity to each of the appliances by running

get cluster status

This command returns cluster health for the NSX Manager and any appliances that are part of the cluster; in this example, it's only a single appliance.

From the first NSX controller, you will want to obtain the thumbprint by running

get certificate api thumbprint

That will provide you the thumbprint of the targeted appliance
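One thing to note before moving on: the join command also needs the cluster ID of the first node. Per the documentation linked above, you can grab it from the same appliance by running

get cluster config

The Cluster Id is listed at the top of that output.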

Moving on to the other node (nsxcon2), which we want to join to nsxcon1, we will use the following command

mgr-new> join <Manager-IP> cluster-id <cluster-id> username <Manager-username> password <Manager-password> thumbprint <Manager-thumbprint>

Here is an example of what that command looks like when populated and run from the node we want to join to our primary one.
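In text form, with purely hypothetical values (the IP and cluster ID below are made up, and the password and thumbprint are placeholders for the real ones gathered above), it would look something like this:

nsxcon2> join 192.168.10.41 cluster-id 8a7d3f2e-1b4c-4c6d-9e0f-2a5b6c7d8e9f username admin password <admin-password> thumbprint <nsxcon1-thumbprint>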

*Please ensure you have taken appropriate backups, as this will take this node and try to join it to another cluster. Since this should be a vanilla install, it should not be too much work to re-deploy if needed.

After a couple of minutes we do receive the following prompt

We can then go back to nsxcon1 and verify with ‘get cluster status’. The cluster status shows ‘DEGRADED’; however, this is normal while the node completes its process of joining and updating the embedded database.

We can take the ‘join’ command we used earlier on nsxcon2 and run it again on nsxcon3.

After running it, going back to nsxcon1 and checking cluster status… we now have 3 nodes appearing.

After a few minutes, our GUI has been fully populated with all NSX Managers reporting as stable

As a cherry on top, we will click ‘Set Virtual IP’ and assign a dedicated IP address, which also has its own DNS record.
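If you'd rather script that last step than click through the UI, the virtual IP can also be set through the API. This is only a sketch with placeholder manager and VIP values, so verify the call against the API reference first:

POST https://<nsx-mgr>/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=<vip-address>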

There is our new virtual IP which has been assigned to one of the nodes