My team and I were tasked with a global vSphere upgrade covering all of our ESXi hosts, our hyper-converged systems and our vCenter. We took our time to build the inventory, check all hosts for compatibility and test the various upgrade paths. The upgrade is being rolled out in multiple steps due to limited personnel (we’re a small team and it’s currently summer holiday season) and also to avoid too much downtime. In this blog post, I’d like to share some personal experiences regarding the upgrade of our vCenter. It didn’t work out as we had planned, but in the end, everything worked fine. I’d also like to give a big shout-out and thank you to my team. You guys rock!
Foreword
Before we dive deep into the vCenter upgrade process and what happened, I’d like to explain a few steps first so you can better understand our approach and the upgrade process in general.
One of the milestones is (at the time of writing this blog post, already “was”) the upgrade of our vCenter. We’re using vCenter for our daily tasks like managing virtual workloads, deploying new ESXi hosts, etc. But before we could upgrade our vCenter from 6.5 to 6.7, we had to do some host upgrades first. Our hyper-converged infrastructure had been running 24/7 without getting much care, care in the form of firmware upgrades, for example. There was just not enough time for maintenance tasks like this over the last few months, or maybe years. Maybe some people were also just afraid of touching these systems; I don’t know for sure. The firmware was old, but at least the hypervisor was on version 6.0 and in pretty good shape.
So we scheduled various maintenance windows, planned the hyper-converged upgrades and made sure we had downloaded everything we needed from the manufacturer to succeed. The firmware upgrade went well on all hosts. One host had a full SEL (System Event Log), which caused some error messages. No real issue at all, just some alerts in vCenter on that cluster that we had to get rid of.
The firmware upgrade on one of the hyper-converged clusters took about 18 hours. That was somewhat expected, because the firmware was really old and did not support ESXi versions higher than 6.0. But everything went well and we had no issues at all, except the full SEL log, which was then cleared.
After that firmware upgrade, we were able to upgrade the ESXi version on all of the hyper-converged clusters to 6.5. This was needed because of the plugins used to manage these hyper-converged systems. OK, to let the cat out of the bag: we’re using Cisco HyperFlex, and the plugin I’m talking about is the HX plugin. The plugin version for ESXi 6.0 isn’t supported with vCenter 6.7. That’s the reason we had to upgrade the HyperFlex systems to ESXi 6.5 first.
As you surely know, you can’t manage ESXi hosts newer than 6.5 with vCenter 6.5. So we had to stop the host upgrades here for the moment, but at least we were now able to upgrade our vCenter. All other hosts had been on 6.0 since they were installed, so there was no issue with upgrading to vCenter 6.7.
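If you want to double-check that kind of thing yourself before a vCenter upgrade, a quick script can list every host’s ESXi version and build. This is just a minimal sketch using pyVmomi (the Python SDK for the vSphere API), not our actual tooling; the vCenter address and credentials are placeholders.

```python
# Minimal sketch: list ESXi version/build of every host in a vCenter inventory
# using pyVmomi (pip install pyvmomi). Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Skip certificate verification for lab use only; use proper certs in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Container view over all HostSystem objects below the root folder
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        product = host.config.product  # AboutInfo: version, build, fullName
        print(f"{host.name}: ESXi {product.version} build {product.build}")
finally:
    Disconnect(si)
```

Running it before (and after) each upgrade step gives you a simple before/after record of where every host stands.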
Oh, did I already mention that our vCenter doesn’t run on-premises but at a cloud provider? No, it’s not VMC on AWS, but another IaaS provider. That didn’t make things easier.
But enough explanation; let’s dive into the main topic now and get to the hard work.