Home Lab, vSphere

Achieve Trifecta – VMware Image-Based Lifecycle – ESXi, Vendor Drivers & Firmware w/ HPE OneView for vCenter Server

I’ve written a couple of articles about VMware’s cluster-based image lifecycle. In this blog post I want to build on vLCM and its integration with a hardware vendor that supports an HSM (Hardware Support Manager; see About Hardware Support Managers). A vendor-supplied HSM allows your hardware vendor to manage the firmware lifecycle through vLCM. For the following walkthrough I have already pre-configured HPE OneView for server management, along with HPE OneView for VMware vCenter, which deploys a plugin to vCenter; I like to think of it as a shim.

Please understand that I do not work for HPE, nor am I endorsed by HPE in any way, and that some of the referenced HPE solutions require particular licenses; please consult an HPE account team or partner.

Integrating vCenter with HPE OneView for VMware vCenter

Once authenticated to HPE OneView for VMware vCenter >> Go to the drop-down menu and select ‘vCenters’

Click ‘+ Add vCenter’, input your vCenter information, and click ‘Add’. If you want to add this vCenter and then be prompted with input fields for another one, click ‘+ Add’ instead.

Once saved, if you go to your vCenter, you will find the plug-in deployed.

I have 2 vCenters in ELM (Enhanced Linked Mode). Although the plugin shows as deployed to my second vCenter, it is not recognized as an HSM; that will require adding the second vCenter in HPE OneView for VMware vCenter as well.

Under vSphere Administration >> Client Plugins, the plugin was successfully deployed. This will require a refresh of your browser.

Please review the VMware by Broadcom KB regarding the deprecation of Local Plugins (Article ID: 313839).

Now if you browse the vSphere Client menu, you should find ‘HPE OneView for VMware vCenter 11.6’.

We now need to register our OneView instance with the plugin.

From the HPE plugin screen, go to ‘HPE OneView Credentials’, click the green ‘+’, input your information, and click ‘Test’ and then ‘Save’.

For the next step, I want to register an HPE SPP (Service Pack for ProLiant), which is a firmware bundle for the ProLiant server line. Under the same HPE plugin section, go to ‘HPE OneView Service Pack Management’; if your plugin can successfully talk to OneView, it should find an SPP. Note that adding SPPs, firmware, etc. to OneView is a separate task within OneView, and this can vary depending on how you manage your servers.

Successful confirmation

Upgrading our vSphere Cluster to vSphere 8

We have an existing cluster that is already image-based, with an HPE ESXi image deployed along with an HPE Customization (the driver pack for HPE ProLiant). You will notice the one component missing is the Firmware and Drivers Addon.

Now to Edit the Image

I selected my latest ESXi version as well as the latest HPE Customization pack. Next, I will click ‘Select’ under ‘Firmware and Drivers Addon’.

We now have a supported Hardware Support Manager for the firmware lifecycle; select the only one listed.

The HSM will poll and recognize the SPP we registered with vLCM earlier. You can see different versions available depending on your target ESXi version; I will select the version supporting 8.0.3.

Now that we have our build-out, we will want to Validate first

The Image is valid. This is a good first step; it means we have achieved interoperability across the stack.

Now we can click ‘Save’ and perform Check Compliance

From here you can perform your pre-checks and begin remediation on the host(s). Your pre-check should show a nice drift comparison.

Because I do not have physical ProLiant servers in my lab, I’m unable to demo the firmware remediation itself.

The following are some helpful links for learning more and getting started.

HPE OneView Partner Integrations

OneView for VMware vCenter 11.6.0 User Guide

Check out the following HPE YouTube video for instructions on deploying HPE OneView for VMware vCenter.

Home Lab, vSphere

Configuring Predictive DRS on VMware vSphere Clusters

VMware’s vSphere DRS (Distributed Resource Scheduler) already does a great job of moving workloads during burst moments when configured on a cluster, but what about moving a workload in advance of a burst to ensure resources are available?

With vSphere’s integration into VCF Operations (formerly Aria and vRealize), you can now send DRS analytics into VCF Operations so that DRS can execute a workload migration before the anticipated burst.

There are some important prerequisites for these actions to take place; you can read more about them here: Predictive Distributed Resource Scheduler (pDRS).

From the settings of a VMware cluster, highlight a Cluster >> Configure >> vSphere DRS >> Edit

By default, ‘Predictive DRS’ is not enabled; to enable it, just click the checkbox. To learn more, click on the information circle and a Help pop-up will appear.

We’re not done yet; our next step is to go into VCF Operations >> Administration >> Integrations >> Edit a vCenter integration >> Advanced Settings.

In the Advanced Settings section, enable the option to send DRS analytics to VCF Operations.

Click ‘Save’

Home Lab, vSphere

Remediating VMTools using PowerCLI – Silent Install w/ No Reboot

I’m a big fan of remediating VMTools using vLCM (vSphere Lifecycle Manager), but some customers want the ability to script the process, or don’t want to reboot the VM right away.

The upcoming instructions are performed in my personal home lab. I have a handful of VMs listed in a CSV file; I will load them into a variable, take a snapshot, install VMTools silently, and skip the reboot so the VMs can be rebooted at a later date, such as a Guest OS patch window or planned maintenance window.

Please be sure to always follow VMware By Broadcom best practices and always test in a lab environment.

The following 6 VMs are saved in a CSV file on my local drive.

The following script is straightforward: it loads the VMs into a variable, uses that variable to create your snapshots, and finally performs the Tools update silently with no reboot.

#Create a variable containing VM names from a CSV file (one VM name per line, no header row)
$VMToolsPatching = Get-Content C:\scripts\VMToolsList.csv

#Take Pre-Upgrade Snapshots
Get-VM -Name $VMToolsPatching | New-Snapshot -Name "VmTools Lifecycle" -Description "Snapshot for VMTools lifecycle" -Memory:$false -Quiesce:$false -Confirm:$false

#Perform VMTools Updates
Get-VM -Name $VMToolsPatching | Update-Tools -NoReboot

Once the snapshot and patching complete, the VM should not reboot; however, the VM’s Summary page will show the Tools version as Current.
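To confirm the same thing from PowerCLI rather than the Summary page, you can read each guest’s Tools status from the vSphere API. This is a minimal sketch, assuming an active Connect-VIServer session and the same $VMToolsPatching variable from the script above.

```powershell
#Report each VM's Tools version and status via the vSphere API
#A status of 'guestToolsCurrent' means Tools is up to date (a pending reboot does not change this)
Get-VM -Name $VMToolsPatching | Select-Object Name,
    @{Name = 'ToolsVersion'; Expression = { $_.ExtensionData.Guest.ToolsVersion }},
    @{Name = 'ToolsStatus';  Expression = { $_.ExtensionData.Guest.ToolsVersionStatus }}
```

This is handy for spot-checking a batch of VMs without opening each Summary page.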

On a Windows machine, if you log onto the desktop, you will find the VMTools icon shows a small reboot badge, indicating a reboot is pending.

There are many options for rebooting the VM once you can determine a maintenance window.

Alternate Reboot Options

While these are not the only options, and nothing beats the good ol’ fashioned ‘Restart Guest OS’ via Tools, I want to share a couple of alternatives; ultimately the best fit may depend on the number of VMs or your operational procedures.
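Since we already have the VM list in a CSV, PowerCLI itself is another option for the maintenance window. This is a sketch using the standard Restart-VMGuest cmdlet (a graceful, Tools-initiated guest reboot); the snapshot cleanup step is my own assumption, based on the snapshot name used in the earlier script.

```powershell
#Load the same VM list used for the Tools update
$VMToolsPatching = Get-Content C:\scripts\VMToolsList.csv

#Gracefully restart the guest OS of each VM (requires running VMware Tools)
Get-VM -Name $VMToolsPatching | Restart-VMGuest -Confirm:$false

#Once the VMs are verified healthy after the reboot, remove the pre-upgrade snapshots
Get-VM -Name $VMToolsPatching | Get-Snapshot -Name "VmTools Lifecycle" | Remove-Snapshot -Confirm:$false
```

As with the update script itself, test this against a lab VM before running it at scale.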

One feature in vSphere 8.0 U3 is scheduling tasks on individual VMs, such as the following.

If you have VCF Operations (formerly Aria & vRealize) you can use Automation Central to schedule the reboot of one or many VMs. The following is just an example of a configured Action to Reboot VMs.

Home Lab, vCenter Server, vSphere

Upgrading to vSphere 8.0 U2c using Lifecycle Manager Cluster Image

Suppose you’re running VMware vSphere Foundation (VVF), or any vSphere configuration outside of VMware’s flagship VMware Cloud Foundation; there are still options to help ease the pain of lifecycle management for your ESXi hosts.

You can learn more about Lifecycle Manager and what I cover in this blog by checking out Creating and Managing vSphere Lifecycle Manager Clusters

To give background on my home lab: the following is a 3-node vSAN cluster running vCenter Server 8.0.2 Build 23929136 (8.0 U2c). I also have Aria Suite Lifecycle and NSX 4.1.x integrated with vCenter. This cluster has already been converted to image-based; I’m simply performing the upgrade.

Click ‘Edit’ and you will find any available ESXi versions you can move to.

I selected 8.0 U2c 23825572 and clicked ‘Save’. Wait for the cluster task to complete. Once completed, your Image should no longer be compliant, and details will be provided.

Before kicking off Remediation, click on the option menu next to ‘Check Compliance’ and select ‘Edit Remediation Settings’

We still have options such as Remediate All or Stage All; today I will perform ‘Remediate All’.
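If you prefer scripting over the UI, the same image edit and remediation can be sketched with PowerCLI’s vLCM cmdlets (Get-LcmImage and Set-Cluster, covered later in this blog). The cluster name is a placeholder, and the base image string follows the ‘version - build’ naming the repository uses; as always, test outside production first.

```powershell
#Find the target 8.0 U2c base image in the vLCM repository
$baseImage = Get-LcmImage -Type BaseImage -Version '8.0 U2c - 23825572'

#Point the cluster's image at the new base image
Set-Cluster '<Cluster>' -BaseImage $baseImage -Confirm:$false

#Kick off remediation of the cluster asynchronously
Get-Cluster -Name '<Cluster>' | Set-Cluster -Remediate -RunAsync -AcceptEULA -Confirm:$false
```

This mirrors the Edit Image / Remediate All flow shown in the screenshots.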

Review the Impact Report and check whether there are any other settings you want to modify.

From the ‘Update’ section of the Cluster object, you can review the status, so far esxi03 is in Maintenance Mode and rebooting.

Be sure to check for any VM / Host Affinity rules on the cluster.

The remediation completed; all hosts are back online and the Image is compliant with the new version.

As a bonus, my hosts contain TPM 2.0 chips, and there were no issues with host attestation with vCenter.

Automation, Home Lab, vSphere

Creating & Remediating Image Based Clusters with VMware PowerCLI

My time as a Systems Administrator came to an end 3 years ago, and I had the self-realization that automation was not in my arsenal, and the time needed to learn it was not always there. HOWEVER, the relentless side of me still wants to learn a few new things.

While everything I did in the following blog could have been performed with a few clicks in the vSphere Client, automating makes much more sense when performing these processes at scale or repetitively for testing.

This article assumes some familiarity with PowerShell, PowerCLI, and VMware technologies.

To obtain the latest PowerCLI package, go to PowerShell Gallery – VMware.PowerCLI. Also, visit VMware Developer Documentation – PowerCLI

As always please take precautions and test these out always in a Test/Dev environment before executing in production environments. Also, this method is not the only way, there is always room for improvement.

This script should accomplish the following tasks:

  • Create a new cluster
  • Add 3 new 7.x ESXi Hosts
  • Place hosts in Maintenance Mode
  • Configure the cluster for Image-Based
  • Add VMware Tools 12.3.5 Component to the Image Cluster
  • Remediate the new cluster asynchronously

As a start, you may want to find out what target ESXi version you want to move to, plus any Components or Vendor Addons. You’re essentially querying everything available in the vLCM repository. My hosts are at 7.0 U3g and, after remediation, should be at 7.0 U3o.

The following will pull all BaseImages which are ESXi 7.x builds.

Get-LcmImage -Type BaseImage -Version '7*'

The only component we want to add/update is VMTools, the following will check for the latest version.

Get-LcmImage -Type Component *tools*

Please take the time to review the code below and replace the variables and placeholders with values for your environment. By no means is it perfect, but it’s a good leap forward for me.

##The following script will create a new cluster and add 3 newly created hosts to the cluster##


##Creating a variable containing the hosts to be imported into vCenter (one hostname per line)##
$ESXiHosts = Get-Content C:\scripts\<File containing hostnames>.txt

##Creates a new cluster and adds the 3 newly built hosts into vCenter##
New-Cluster -Name "<Cluster>" -Location (Get-Datacenter)
foreach ($ESXiHost in $ESXiHosts) {
    Add-VMHost -Server <vCenter hostname> -Name $ESXiHost -Location "<Cluster>" -User root -Password "<password>" -Force
}
Set-VMHost $ESXiHosts -State Maintenance -Confirm:$false -RunAsync | Out-Null

##The following section will go through creating variables containing the Base Image and any Components.##

##Creating a variable for base image version##
$esxiBaseImage = "7.0 U3o - 22348816"

##Creating a variable which points to the Base Image in the vLCM repository##
$esxiBaseImageName = Get-LcmImage -Type BaseImage -Version $esxiBaseImage

#For VMTools, create a variable pointing to the matching Component in the vLCM repository
$esxiCompToolsPackage = Get-LcmImage -Type Component -Version '*12.3.5'

##This command will convert the selected cluster into an Image-Based cluster. Remember: **this is an irreversible action**##
Set-Cluster '<Cluster>' -BaseImage $esxiBaseImageName -Component $esxiCompToolsPackage -Confirm:$false

#This command will begin remediation of the cluster asynchronously
Get-Cluster -Name '<Cluster>' | Set-Cluster -Remediate -RunAsync -AcceptEULA -Confirm:$false

The hosts in this sample were nested hosts created in the environment. Please don’t hesitate to reach out with any questions or comments.