How to reset the ESXi root password?

When I check my blog, I see that the last post is from February 2022. That’s a long time ago already! Time to write something, isn’t it?

Back in the days when I was working as a Systems Engineer for an IT service provider, it was much easier to write blog posts. Now, as a “customer”, I don’t find the time or the ideas, or maybe I’m just forgetting blog post ideas, I’m not sure why. I’m always struggling with whether I should blog about this or that: is it worth writing about, or are there already gazillions of blog posts covering the exact same thing?

Today’s blog post covers exactly such a topic, one that I assume has already been written about a few times. But it was a problem we ran into just recently during an ongoing vSphere upgrade project, and I was able to help our operations team move on with their work. So why not write a blog post about it?

What happened?

As mentioned, we’re currently working on a global vSphere upgrade project. We’ve got many ESXi hosts and clusters all around the world. So far nothing special. And even though there are easy-to-understand guides available internally (I wrote them myself and triple-checked them), one or another point on a checklist may get forgotten, or you just don’t think of it in the heat of the moment. One such point is “Check that the current credentials are working”. Thanks to the following troubleshooting guidance, there was no show stopper, only a few minutes of delay for the upgrade of one ESXi host.

The root password for one of the ESXi hosts didn’t work. There was no way to log in, neither through the Web UI nor via SSH. So what to do then?

There are only two officially supported ways to reset the root password of an ESXi host: you can reinstall the host from scratch, or you can use host profiles. Reinstallation would be an option as we’re upgrading vSphere anyway, but it would require some additional time to restore the ESXi configuration. Using a host profile can be done, but it requires an Enterprise Plus license.

Because we have some spare Enterprise Plus licenses left (not yet needed for hosts, but already planned for use), we decided to go with the host profile approach. And it wasn’t rocket science!

How can you do it?

The actual troubleshooting chapter is divided into two parts: the first part covers changing the current license of an ESXi host, and the second part is all about the host profile.

If you don’t have an Enterprise Plus license, then you have to plan on reinstalling the ESXi host from scratch.

Change the host license

  1. Log in to the vCenter WebClient (https://yourvcenter.domain.com/ui)
  2. In vCenter, go to Home and then choose Administration and then Licenses
  3. Click the Assets tab and then the HOSTS button
  4. In the Asset column, you can click the filter icon and search for the ESXi host where you want to assign a different license
  5. Select the host, then click Assign License just above the list
  6. Choose the Enterprise Plus license, and click OK
  7. The host will now have an Enterprise Plus license, and you can continue with the steps below.

Once you’re done, remember to switch the license back to the one that was assigned to the ESXi host before.

Extract, change, and apply the host profile

  1. Log in to the vCenter WebClient (https://yourvcenter.domain.com/ui)
  2. In vCenter, go to Home and then choose Policies and Profiles, and click Host Profiles
  3. Click → Extract Host Profile
  4. In the Extract Host Profile wizard → Select the host you want to update the password for, then click Next
  5. Name the Host Profile and click Next and then Finish to complete the capture of the host profile template
    • The new host profile should appear in the list of host profiles
  6. Right-click the new Host Profile and choose → Edit Host Profile
  7. In the Edit Host Profile wizard, uncheck all boxes
  8. Then using the search filter search for → root
  9. Highlight and then select the check box for → User Configuration / root
    • Make sure to only select this item when searching for root
  10. A configurable window will display the root user configuration
  11. At the Password subsection, choose → Fixed password configuration
  12. Here you have to fill in the new password and confirm it before proceeding
  13. Double-check that all other non-applicable boxes have no check marks and proceed to Finish
  14. Once the task completes, right-click the new host profile and choose → Attach/Detach Hosts and Clusters → then select the host in the wizard
  15. Right-click the host profile again, and select Remediate
  16. Remove/detach the host profile from the host
    • At this point the root password should be successfully reset; a quick SSH check is sketched right below
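
Once the profile has been remediated and detached, a quick way to confirm the new password works is to try it over SSH, assuming the SSH service is enabled on the host. A minimal sketch (the host name below is just a placeholder):

# Replace the placeholder host name with your ESXi host's FQDN or IP address.
# You'll be prompted for the new root password; any successful command proves the login works.
ssh root@esxi01.example.com 'esxcli system version get'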

Please be careful. It is recommended that you do this while the host is in maintenance mode. If the host is part of a cluster, great: you can move all VMs away from it with DRS (automatically or manually). If it is a standalone host, make sure to shut down the VMs first, just in case the host reboots. In our case, the affected host did not reboot, but there is a checkbox in the remediation settings that can cause the host to reboot.

No vMotion possible after ESXi host BIOS update

I was working on some ESXi upgrades recently. We’re currently preparing everything to make the eventual upgrade to vSphere 7 as smooth as silk. That means we’re rolling out vSphere 6.7 on all of our systems. Recently, I was tasked with upgrading some hosts in a facility a few hundred miles away. The task itself was super easy; managing it with vSphere Update Manager worked like a charm. But before the vSphere upgrade, I had to upgrade the BIOS and server firmware to make sure we stayed within the VMware HCL.

The second host was done within one hour and received the complete care package. But the first host took a bit longer due to some unforeseen troubleshooting. I’d like to share a few tips that will hopefully be helpful.

What happened?

As mentioned, upgrading the ESXi host through vSphere Update Manager worked like a charm. But before that, I booted the server remotely with the Service Pack for ProLiant ISO image to upgrade the BIOS and firmware of that server, and that also went very well. As there are two ESXi hosts at this location with shared storage available, we were able to move the VMs from one host to the other without further issues. Place one host into maintenance mode, upgrade it, take it out of maintenance mode, and then do the same for the second server. That was the idea.

Read more

Upgrade VCSA through CLI Installer

My team and I were tasked with a global vSphere upgrade of all of our ESXi hosts, hyper-converged systems, and our vCenter. We took enough time to take stock of the inventory, check all the hosts for compatibility, and test the various upgrade paths. The upgrade will be rolled out in multiple steps, due to personnel resources (we’re a small team and it’s currently the summer holiday season) and also to avoid too much downtime. In this blog post, I’d like to share some personal experiences regarding the upgrade of our vCenter. It didn’t work out as we’d planned, but in the end everything worked fine. I’d also like to give a big shout-out to my team. You guys rock!

Foreword

Before we dive deeply into the vCenter upgrade process and what happened, I’d like to explain a few things first, so you can better understand our approach and the upgrade process in general.

One of the milestones is (at the time of writing this blog post, already “was”) the upgrade of our vCenter. We’re using vCenter for our daily tasks like managing virtual workloads, deploying new ESXi hosts, etc. But before we could upgrade our vCenter from 6.5 to 6.7, we had to do some host upgrades first. Our hyper-converged infrastructure was running 24/7 without getting much care, such as firmware upgrades. There was just not enough time for maintenance tasks like this over the last few months, or maybe years. Maybe some people were also just afraid of touching these systems, I don’t know for sure. The firmware was old, but at least the hypervisor was on version 6.0 and in pretty good shape.

So we scheduled various maintenance windows, planned the hyper-converged upgrades, and made sure that we had downloaded everything we needed from the manufacturer to succeed. The firmware upgrade went well on all hosts. One host had a full SEL log, which caused some error messages. No real issue at all, but it produced some alerts in vCenter on that cluster that we had to get rid of.

The firmware upgrade on one of the hyper-converged clusters took about 18 hours. That was somewhat expected, because the firmware was really old and did not support ESXi versions higher than 6.0. But everything went well and we had no issues at all, except the full SEL log, which was then cleared.

After that firmware upgrade, we were able to upgrade the ESXi version on all of the hyper-converged clusters to 6.5. This was needed because of some plugins used to manage these hyper-converged systems. Ok, to let the cat out of the bag: we’re using Cisco HyperFlex, and the plugin I’m talking about is the HX plugin. The plugin version for ESXi 6.0 wasn’t supported with vCenter 6.7. That’s the reason we had to upgrade the HyperFlex systems to ESXi 6.5 first.

As you surely know, you can’t manage ESXi hosts newer than 6.5 with vCenter 6.5. So we had to stop here for the moment, but we were now at least able to upgrade our vCenter. All other hosts had been on 6.0 since they were installed, so there were no issues with upgrading to vCenter 6.7.

Oh, did I already mention that our vCenter doesn’t run on-premises but at a cloud provider? No, it’s not VMC on AWS, but some other IaaS provider. That didn’t make it easier.

But enough explanation, let’s dive into the main topic now and do the hard work.
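
For orientation: the CLI installer is driven by the vcsa-deploy tool that ships on the vCenter Server Appliance ISO (in the vcsa-cli-installer folder) and works off a JSON template describing the source and target appliance. Very roughly, the calls look like the sketch below; the exact option names and template fields depend on the installer version, so check vcsa-deploy --help and the bundled example templates rather than treating this as the exact syntax we used.

# From the vcsa-cli-installer directory of the mounted VCSA 6.7 ISO (lin64/mac/win32 subfolders).
# Run only the prechecks against your upgrade template first...
./vcsa-deploy upgrade --accept-eula --precheck-only my-vcsa-upgrade.json
# ...and if everything looks good, run the actual upgrade with the same template.
./vcsa-deploy upgrade --accept-eula my-vcsa-upgrade.json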

Read more

Expand your logical drive to extend a VMFS datastore

Recently, I had to add some hardware to an HPE ProLiant DL380 server. Ok, it wasn’t me, because this server runs in a location about 3600 miles away from me; the engineers on-site completed that task. They added some memory and more disks to the server. My task was to add the newly installed disks to the existing ESXi datastore. This was (and still is) a standalone ESXi server, what we call a black box server. It’s a standard HPE ProLiant server with local disks and an SD card as the ESXi boot disk, and it is centrally managed in a vCenter. Local IT staff have limited access to vCenter, just enough to manage their workloads on that specific ESXi black box. There isn’t much running on these black boxes, mostly an SCCM distribution point, because the sites lack sufficient bandwidth. But anyway, that’s not the topic here.

I want to show you which steps I missed in the first attempt and how I managed to fix it.

As there is not much running on this black box server, it was easy to schedule a maintenance window to shut down the workloads and the ESXi server itself, so the engineers on-site could install the hardware (memory and disks). Through the iLO interface, I started the server and accessed the Smart Storage Administrator, which is part of the Intelligent Provisioning toolkit on Gen9 and later servers. It was easy to add the unassigned disks to the already existing RAID array. The rebuild took some hours because all data had to be redistributed across all disks, parity included.
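
For reference, the same array expansion can also be done from the command line with the hpssacli tool that I use later in this post, instead of the GUI. A rough sketch (the array letter and drive IDs are placeholders, so adjust them to your controller layout):

# Placeholder array letter and drive IDs -- run "ctrl slot=0 show config" first to see yours.
# Adds the newly installed, unassigned physical drives to the existing array A on the controller in slot 0.
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 array A add drives=2I:1:5,2I:1:6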

After the server was up and running again, I tried to increase the VMFS datastore capacity. It didn’t work as expected: I didn’t see any device or LUN that I could extend. That made me curious.

Well then, back to the drawing board…

It wasn’t easy to schedule another maintenance window this time, but I asked the responsible person if he could suggest one. In the meantime, I was digging through the internet to find out what was wrong or what I had missed. I found out that just adding the new disks to the existing RAID array doesn’t solve the issue on its own: I also had to expand the logical drive. That was the key! So, could this be done without another downtime? Thankfully, yes!

But before we go deeper here, please, always take a backup of your workloads first. Just in case. Better safe than sorry!

It is indeed possible to expand the logical drive on an HPE ProLiant server without downtime. I’m talking about a relatively easy, not very complex task here. It’s not like I’m going to create new arrays or change the RAID mode. No, just expanding the logical drive.

First, I connected to that ESXi server with SSH to see if the HPE tools were installed. And they were. It’s highly recommended to use the vendor’s custom VMware ESXi ISO image when installing servers from a vendor like HPE or Dell. These images include all the necessary drivers for your hardware, such as the network and storage controllers, and most of all, they also include some nifty tools.
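
If you want to verify whether the HPE tooling actually made it onto a given host, a quick check from the same SSH session looks roughly like this (the exact VIB name varies between HPE bundle versions, so adjust the grep pattern if needed):

# List the installed VIBs and look for the Smart Storage Administrator CLI.
# Depending on the bundle version, the VIB may be named hpssacli or ssacli.
esxcli software vib list | grep -i ssacli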

In my case, I’m using the tool “hpssacli”. This tool is just the command line version of the Smart Storage Administrator (HP SSA CLI => Smart Storage Administrator CLI). Nice, isn’t it? 😉

Take me to the CLI, please!

I needed only a few commands to get things in order. Let’s get into it!

First, I checked the logical drives on the controller:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld all show status

In most cases, the controller is installed in slot 0. I’m talking about the P440ar here, which is onboard if I’m not mistaken, so definitely slot 0. With “ld all”, the command displays all logical drives configured on that controller.

Next, now that I had the logical drive ID, I checked the details of that logical drive:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 show

This gave me an overview of that logical drive, and I saw that it was only half of the expected size, even though disks had been added.

Just to make sure, I checked the controller to see if all disk drives were assigned:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 show config

The output showed me that all installed drives were assigned and that there were no unassigned drives. As this controller only had one logical drive, all disk drives were assigned there.

Ok, so then it should be possible to extend the logical drive. And it is, with the following command:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 modify size=max forced

This command extends the logical drive so that it uses all the disk space available in the RAID array.
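
If the additional capacity doesn’t show up in vSphere right away, a storage rescan from the same SSH session may help before heading back to the web client (a small sketch using standard esxcli commands):

# Rescan all storage adapters so ESXi picks up the new size of the logical drive.
esxcli storage core adapter rescan --all
# Optionally list the devices to confirm the larger capacity is now visible.
esxcli storage core device list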

After that, I was able to increase the VMFS datastore capacity through the vSphere web client without problems.

Recap of the latest VMware vSphere 6.7 releases

vSphere 6.7

Oh boy, what a week! Some say that winter is now finally gone: nice and warm weather, no more wearing winter jackets. But hey, I’m not a weatherman. And when you’re sitting in the office, I think it doesn’t matter whether it’s raining or snowing outside. Just kidding… Let’s get back to business.

There were some rumors about the next upcoming version. Would it be version 7? Or something just above 6.5? VMware did release several new product versions, and they all carry version number 6.7. What a list! It’s one of those email notifications that I like to scroll down through, a little more, and more and more, to soak up all the news like a sponge. I’d like to dive in right now and provide you with a recap of this week’s VMware releases. And as I said, it’s quite a list. I’ll pick out just some of the new key features. You can find the full release news on the VMware Blogs (links provided here).

New product versions

vSphere 6.7

  • several new APIs that improve the efficiency and experience to deploy vCenter, to deploy multiple vCenters based on a template, to make management of vCenter Server Appliance significantly easier, as well as for backup and restore
  • significantly simplifies the vCenter Server topology through vCenter with embedded platform services controller in enhanced linked mode
  • 2X faster performance in vCenter operations per second
  • 3X reduction in memory usage
  • 3X faster DRS-related operations (e.g. power-on virtual machine)
  • vSphere 6.7 improves efficiency when updating ESXi hosts, significantly reducing maintenance time by eliminating one of two reboots normally required for major version upgrades (Single Reboot). In addition to that, vSphere Quick Boot is a new innovation that restarts the ESXi hypervisor without rebooting the physical host, skipping time-consuming hardware initialization
  • The HTML5-based vSphere Client provides a modern user interface experience that is both responsive and easy to use, and it now includes other key functionality like managing NSX, vSAN and VUM, as well as third-party components.
  • enabling encrypted vMotion across different vCenter instances
  • enhancements to Nvidia GRID vGPU
  • vSphere 6.7 introduces vCenter Server Hybrid Linked Mode, which makes it easy and simple for customers to have unified visibility and manageability across an on-premises vSphere environment running on one version and a vSphere-based public cloud environment, such as VMware Cloud on AWS, running on a different version of vSphere.
  • vSphere 6.7 also introduces Cross-Cloud Cold and Hot Migration
  • Delivers a new capability that is key for the hybrid cloud, called Per-VM EVC

More information here: Introducing VMware vSphere 6.7 / VMware Blogs

vSAN 6.7

  • vSAN 6.7 provides intuitive operations that align with other VMware products from a UI and workflow perspective to provide a “one team, one tool” experience
  • Introduces a new HTML5 UI based on the “Clarity” framework as seen in other VMware products (all products in the VMware portfolio are moving toward this UI framework)
  • A new feature known as “vRealize Operations within vCenter” provides an easy way for customers to see vRealize intelligence directly in the vSphere Client
  • vSAN 6.7 now expands the flexibility of the vSAN iSCSI service to support Windows Server Failover Clusters (WSFC)
  • vSAN 6.7 introduces an all-new Adaptive Resync feature to ensure a fair-share of resources are available for VM I/Os and Resync I/Os during dynamic changes in load on the system
  • Optimizes the de-staging mechanism, resulting in data that “drains” more quickly from the write buffer to the capacity tier. The ability to de-stage this data quickly allows the cache tier to accept new I/O, which reduces or eliminates periods of congestion
  • New health checks include:
    • Maintenance mode verification ensures proper decommission state
    • Consistent configuration verification for advanced settings
    • vSAN and vMotion network connectivity checks improved
    • Improved vSAN Health service installation check
    • Improved physical disk health check combines multiple checks (software, physical, metadata) into a single notification
    • Firmware check is independent from driver check

More information here: What’s New with VMware vSAN 6.7 / VMware Blogs and also here: Extending Hybrid Cloud Leadership with vSAN 6.7

vCenter Server 6.7

  • The vSphere Client (HTML5) is full of new workflows and closer to feature parity
  • built-in file-based vCenter Server backup now includes a scheduler

Installation

  • No load balancer required for high availability and fully supports native vCenter Server High Availability.
  • SSO Site boundary removal provides flexibility of placement.
  • Supports vSphere scale maximums.
  • Allows for 15 deployments in a vSphere Single Sign-On Domain.
  • Reduces the number of nodes to manage and maintain.

Migration

  • vSphere 6.7 is also the last release to include vCenter Server for Windows, which has been deprecated.
  • migrate to the vCenter Server Appliance with the built-in Migration Tool
  • Deploy & import all data
  • Deploy & import data in the background
  • Customers will also get an estimated time of how long each option will take when migrating

Upgrading

  • vSphere 6.7 will support upgrades and migrations only from vSphere 6.0 or 6.5
  • vSphere 5.5 does not have a direct upgrade path to vSphere 6.7
  • Upgrade path: vSphere 5.5 to vSphere 6.0 or 6.5, and then to vSphere 6.7
  • vCenter Server 6.0 or 6.5 managing ESXi 5.5 hosts cannot be upgraded or migrated until the hosts have been upgraded to at least ESXi 6.0
  • Reminder: end of general support for vSphere 5.5 is September 19, 2018.

Monitoring and Management

  • vSphere Appliance Management Interface (VAMI) on port 5480 has received an update to the Clarity UI
  • There is now a tab dedicated to monitoring. Here you can see CPU, memory, network, database and disk utilization.
  • Another new tab called Services is also within the VAMI, giving the option to start, stop, and restart vCenter Server services if needed
  • vSphere 6.7 also marks the final release of the vSphere Web Client (Flash). Some of the newer workflows in the updated vSphere HTML5 Client release include:
    • vSphere Update Manager
    • Content Library
    • vSAN
    • Storage Policies
    • Host Profiles
    • vDS Topology Diagram
    • Licensing

More information here: Introducing vCenter Server 6.7 / VMware Blogs

vSphere with Operations Management 6.7

  • new plugin for the vSphere Client. This plugin is available out-of-the-box and provides some great new functionality
  • When interacting with this plugin, you will be greeted with 6 vRealize Operations Manager (vROps) dashboards directly in the vSphere client
  • overview, cluster view, and alerts for both vCenter and vSAN views
  • The new Quick Start page makes it easier to get directly to the data you need
  • four use cases: Optimize Performance, Optimize Capacity, Troubleshoot, and Manage Configuration
  • The Workload Optimization dashboard was updated. Workload Optimization takes predictive analytics and uses them in conjunction with vSphere Distributed Resource Scheduler (DRS) to move workloads between clusters. New with vROps 6.7, you can now fine tune the configuration for workload optimization
  • vROps 6.7 introduced a completely new capacity engine that is smarter and much faster

More information here: vSphere with Operations Management 6.7 / VMware Blogs

vSphere 6.7 Security

  • TPM 2.0 support for ESXi
  • Virtual TPM 2.0 for VMs
  • Support for Microsoft Virtualization Based Security
  • UI updates (combined all encryption functions (VM Encryption, vMotion Encryption) into one panel in VM Options)
  • Multiple SYSLOG targets
  • FIPS 140-2 validated cryptographic modules – by default!

More information here: vSphere 6.7 Security / VMware Blogs

Developer and Automation Interfaces for vSphere 6.7

  • Added functionality to existing APIs in vSphere 6.7
  • Coverage of new areas
  • Appliance API updates: from prechecks to staging to installation and validation, it’s all available by API now
  • vCenter API updates: new APIs have been added to interact with the VM’s guest operating system (OS), viewing Storage Policy Based Management (SPBM) policies, and managing vCenter server services
  • also a handful of new APIs to handle the deployment and lifecycle of the vCenter server
  • a handful of updates to the vSphere Web Services (SOAP) APIs as well

More information here: Developer and Automation Interfaces for vSphere 6.7 / VMware Blogs

Faster Lifecycle Management Operations in VMware vSphere 6.7

  • brand-new Update Manager interface which is now part of the HTML5 Client
  • Update Manager in vSphere 6.7 keeps VMware ESXi 6.0 to 6.7 hosts reliable and secure
  • the new UI provides a much more streamlined remediation process, requiring just a few clicks to begin the procedure. It’s not just a port from the old Flash client
  • Hosts that are currently on ESXi 6.5 will be upgraded to 6.7 significantly faster than ever before
  • Several optimizations have been made for that upgrade path, including eliminating one of two reboots traditionally required for a host upgrade
  • Quick Boot eliminates the time-consuming hardware initialization phase by shutting down ESXi in an orderly manner and then immediately re-starting it

More information here: Faster Lifecycle Management Operations in VMware vSphere 6.7 / VMware Blogs

vSphere 6.7 for Enterprise Applications

  • include support for Persistent Memory (PMEM) and enhanced support for Remote Direct Memory Access (RDMA)
  • PMEM is a new layer called Non-Volatile Memory (NVM) and sits between NAND flash and DRAM, providing faster performance relative to NAND flash but also providing the non-volatility not typically found in traditional memory offerings
  • new protocol support for Remote Direct Memory Access (RDMA) over Converged Ethernet, or RoCE (pronounced “rocky”) v2, a new software Fibre Channel over Ethernet (FCoE) adapter, and iSCSI Extension for RDMA (iSER)

More information here: vSphere 6.7 for Enterprise Applications / VMware Blogs