No vMotion possible after ESXi host BIOS update

I was working on some ESXi upgrades recently. We’re currently preparing everything to make the eventual upgrade to vSphere 7 as smooth as silk. That means we’re rolling out vSphere 6.7 on all of our systems. Recently, I was tasked with upgrading some hosts in a facility a few hundred miles away. The task itself was super easy; managing it with vSphere Update Manager worked like a charm. But before the vSphere upgrade, I had to upgrade the BIOS and server firmware to make sure we’d stay within the VMware HCL.

The second host was done within one hour and received the complete care package. But the first host took a bit longer due to unforeseen troubleshooting. I’d like to share some tips that will hopefully be helpful.

What happened?

As mentioned, upgrading the ESXi host through vSphere Update Manager worked like a charm. But before that, I booted the server remotely with the Service Pack for ProLiant ISO image to upgrade the BIOS and firmware of that server. That also went very well. As there are two ESXi hosts at this location, we had shared storage available and were able to move the VMs from one host to the other without further issues. Place one host into maintenance mode, upgrade it, take it out of maintenance mode, and repeat for the second server. That was the idea.

But unfortunately, the gods of IT had something different in mind. After upgrading the first host, we tried to move the VMs back to this host to prepare the upgrade for the second host. Well, some VMs were able to be moved there, some were not. But why?

When we moved some particular VMs back from the second host to the upgraded one, we received the error below:

That made us curious. Why did this happen? When checking the tasks and events of that host, and also of the affected VM in vCenter, we didn’t find much information. After some internet research, we found some possible causes, but none of them fit our issue. So we had to dig a bit deeper. Thanks to the vmware.log file, located in the VM folder, we were able to find out the following:

Ok, that sounds funny that VMX has left the building, but not sure why. Seems to be a boring party…
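If you want to sift through a vmware.log yourself, filtering for migration-related lines can be scripted in a few lines. This is a minimal sketch; the log excerpt is a made-up sample, not the actual output from our hosts (on ESXi the real file lives in the VM folder, e.g. under /vmfs/volumes/):

```python
# Minimal sketch: filter a vmware.log for lines related to a (failed) vMotion.
# The log excerpt below is a made-up sample for illustration only.

SAMPLE_LOG = """\
2020-07-01T10:15:01.123Z| vmx| I125: VMX idle.
2020-07-01T10:15:02.456Z| vmx| I125: Migrate: cleaning up migration state.
2020-07-01T10:15:02.789Z| vmx| E105: Migrate: Failed: The vMotion failed.
2020-07-01T10:15:03.000Z| vmx| I125: VMX exit (0).
"""

KEYWORDS = ("migrate", "vmotion")

def migration_lines(log_text: str) -> list[str]:
    """Return all log lines mentioning migration/vMotion, case-insensitively."""
    return [
        line for line in log_text.splitlines()
        if any(keyword in line.lower() for keyword in KEYWORDS)
    ]

for line in migration_lines(SAMPLE_LOG):
    print(line)
```

On a real host you’d read the file instead of a string, but the filtering idea is the same.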

Some more digging brought some more helpful information:

Obviously, the vMotion failed because some CPU features differ between the first (already updated) host and the second one. But wait, the two servers are the same model, with the same hardware configuration. How can that be?
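Conceptually, the compatibility check behind this error boils down to a set comparison: every CPU feature exposed to the VM on the source host must also be available on the destination host. The feature names in this sketch are illustrative placeholders, not the actual flags our hosts reported:

```python
# Sketch of the idea behind the vMotion compatibility error: the migration is
# refused when the destination host lacks CPU features the source exposes.
# Feature names below are illustrative placeholders only.

def missing_features(source: set[str], destination: set[str]) -> set[str]:
    """Features available on the source host that the destination lacks."""
    return source - destination

# Two "identical" hosts can diverge after a BIOS/microcode update, e.g. when
# a mitigation-related flag is newly exposed on only one of them.
host_updated = {"sse4_2", "avx", "aes", "md_clear"}
host_not_yet = {"sse4_2", "avx", "aes"}

print(missing_features(host_updated, host_not_yet))
```

This also explains why a BIOS update alone can break vMotion between otherwise identical servers: the firmware (and its bundled microcode) decides which flags the CPU exposes.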

The solution

That led us to the conclusion that it must be something with the VM compatibility level. But wait again: some movable VMs were on VM hardware version 8, and the VMs with failed vMotion were also on version 8. To be honest, we weren’t able to find the exact difference here. But that left us with two solutions: either upgrade the VM hardware version or install an ESXi patch. We decided to install the patch, as we didn’t want to reboot some VMs (although we did that later anyway).

And before you complain now, yes, I’m aware of the fact that the patch in the linked KB article is not the most recent ESXi build. It’s somewhat historical. When we started with the global vSphere 6.7 rollout, vSphere 6.7 Update 2 was the latest version available. And yes, we’re currently again planning new rollouts, as it was sadly neglected in the past. But you know, time and human resources, and change requests…

My homelab hardware gets its own rack

This project started a long time ago. When I planned the hardware needs for my homelab, I also thought of getting a rack. I had a real IT rack in mind, as you know it from your daily business, maybe back in the days when at least some stuff was on-premises and not everything in the cloud. I wanted to get a small rack with enough space to mount my whole homelab hardware into it, to have a proper cabling solution, and to have flexibility in case my homelab gets an extension.

But that wasn’t easy. There are various flavors of racks: the normal 42-unit IT rack, half-height racks, and also various wall-mountable racks for patch panels, switches, and smaller devices. I was thinking and tinkering, looking for specs. But in the end, nothing satisfied me, at least not from a price perspective, or in terms of being able to transport it. And then, something happened on Twitter:

Thanks to my colleague Michael Schroeder, I found something. He mentioned his IKEA rack, and that made me curious. Earlier in June, my colleague Fred Hofer announced that he had moved his hardware into a bigger rack, and that it was easier than when he moved from an IKEA Lack rack to the small rack:

And that was the trigger! Why not build my own rack and tailor it to my needs? I wouldn’t have to spend much money on a real IT rack, and I could do something handcrafted. The rack didn’t have to be anything special; there wasn’t much in my personal specification book.

These are the planned specifications:

  • Small (not full 42 rack units)
  • It should be lightweight
  • Enough space for at least three servers, some switches, and a NAS (or two)
  • Enough space for future homelab upgrades
  • Extensible, if needed
  • Should withstand some weight
  • Wheels!

The idea of building my own IKEA Lack Rack was born.

This whole homelab IKEA Lack Rack story will be covered in a small blog series. This post starts the series with some planning, the first pictures, and the BOM, as far as I can provide it already. The BOM, at least, will be updated if there’s reason for it.

Read more

An easy way to quickly migrate a VMware VM to Synology VMM

When it comes to virtualization, I’m working with VMware products in my homelab, alongside (hardware) products from other manufacturers. But some special circumstances made a special solution to a problem necessary. Due to a month of military duty, during which I was at home only on weekends, I shut down my homelab. Not only because of that, but also because I’m currently building my own customized rack, where I will install my homelab hardware. Be sure to check my blog frequently to get more information about the rack, as I will blog about it soon!

What’s the reason for this migration?

I’m using Ubiquiti hardware for my networking (lab switches, home networking, including wireless), and also a Pi-Hole as my ad-blocker. These are the only “business-critical” services in my home network. And they were running on my homelab. But what should I do when I shut down everything? Well, VMware Workstation to the rescue! I’m (actually, I was) running ESXi on VMware Workstation on my gaming computer. This ESXi server was managed with vCenter as a replication target for Veeam Backup and Replication. Quickly migrate the VMs to that virtual ESXi host, and that’s it. But what happens when I accidentally shut down this PC? Or want to shut it down? I need another solution, one that runs 24/7!

What’s the solution?

That made me think about Synology. I knew that at least some Synology NAS systems can run virtual workloads directly, either as a virtual machine or within Docker. I didn’t want to go with Docker because of my lack of knowledge, and because I have only limited system resources on that NAS box. So it would be two VMs running on my Synology box! But how?

You can’t just vMotion your VMware VM to Synology VMM (Virtual Machine Manager). You can export the VMDK files or create an OVF, which you then import into Synology VMM. But that somehow took too long (in certain circumstances I can be impatient …).

This blog post will show you how you can easily back up your VMware VMs to a Synology box, with Synology’s own toolset, and restore them directly into Synology VMM. It might come in handy in case you’re also searching for a nifty solution to run a Pi-Hole, a Ubiquiti controller, or some other small VMs.

To be honest, the Synology box isn’t a Ferrari, or a Freightliner, in terms of performance and/or capacity. Such a NAS is always somewhat limited in CPU resources and memory. In my case, I was happy that I had maxed out the memory when I initially bought the NAS box. My current NAS looks like this:

You can see there are not many resources, but it should be fine for some tiny Linux VMs. Even a domain controller can run on it if the resources are used sparingly. But don’t expect too much… And now let’s dive into the topic.

Read more

My website just got an update – speed and design

“A long long time ago, I can still remember how…”

You all know that song. It’s now two years since I last moved my website from one provider to another. And no, this blog post isn’t about another move. It’s just a small update on how my website is performing and what I did over the last few days and weeks to make it perform and look better.

Back in April 2018, I published a blog post about my website going serverless. The reason I wanted to go serverless was website performance. I stumbled across some Tweets talking about the search functionality on a website, not using a word or tag cloud, and so on. All of this led me to dig into the topic more intensively and, at that time, to move my website to a new hosting provider. In the end, I decided to go serverless with my website. But that wasn’t easy. I love WordPress as a blog tool, or publishing platform, or whatever you would call it. It is easy, flexible, and you can do so many things with WordPress.

But WordPress is based on PHP for the frontend and MySQL as the backend database. And that’s all dynamic content. Each blog post you read, every function on the website, is executed or rendered dynamically. That doesn’t exactly make for high performance. There are some techniques, such as caching plugins or other tweaking tools, to make the website perform better. But it’s still dynamic content in the end.
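The core idea behind going static can be sketched in a few lines: instead of rendering a page on every request, as PHP does, you render each page once at build time and let the web server hand out ready-made files. A toy sketch, with made-up template and posts (not my actual site content or toolchain):

```python
# Toy sketch of static site generation: render every post once at build time,
# so the web server only serves prebuilt HTML. Template and posts are
# made-up placeholders for illustration.

TEMPLATE = "<html><head><title>{title}</title></head><body>{body}</body></html>"

posts = [
    {"slug": "hello-world", "title": "Hello World", "body": "<p>First post.</p>"},
    {"slug": "going-static", "title": "Going Static", "body": "<p>No PHP needed.</p>"},
]

def build(posts: list[dict]) -> dict[str, str]:
    """Pre-render all posts into a {filename: html} mapping."""
    return {
        f"{post['slug']}.html": TEMPLATE.format(title=post["title"], body=post["body"])
        for post in posts
    }

site = build(posts)
for filename in site:
    print(filename)  # in a real build, these would be written to disk once
```

Serving such prebuilt files is about as cheap as a web request gets; there is no database query and no template rendering left to do at request time.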

Read more

Working with templates in vSphere 7

One great new feature in vSphere 7 is template versioning. You may have heard about it already somewhere, or read about it on various blogs shortly after the announcement of vSphere 7.

I recently had to restore some of my Windows templates because something went wrong. Then I thought, why not try out the new template versioning? Well, it’s easier said than done. I’ve found that working with templates in vSphere 7 isn’t much different from vSphere 6.x. It also doesn’t differ much when working with content libraries. But there are still some differences, and maybe even limitations. I’ll update this post when I find out more. Maybe I’m just doing it wrong, or it’s a bug like the one where the VMs and Templates view doesn’t show folders beyond two levels in the vSphere Client (KB 78693).

What is vSphere template versioning?

First, it’s a great feature! With vSphere 7, you can now have multiple versions of a template. For example, you create your base template, then there’s the next version when you install patches and updates, and so on. If something goes wrong, you can revert to the previous version of your template. Also, if you’ve got a huge template chain already, you can delete the oldest versions of your template. In my humble opinion, there is some room for improvement when working with templates and versioning. But I’ll show you later what I mean.

How to work with the new versioning?

As far as I have tested, you can’t convert existing templates to a template with versioning enabled. I mean, there is no button like “convert that template”. It’s a manual task. That applies to templates stored on a datastore connected to your ESXi host, as well as to templates stored in the content library. But as I already mentioned, maybe I’m just doing it wrong (hopefully not). And in case this does work after all, I’ll find out and update this post.

How to work with template versioning then? It’s pretty easy. You set up your virtual machine and do everything you need to prepare it as your VM template. Michael White has some great posts about creating VM templates. Let’s assume that you’ve got your VM ready for the next steps.

Read more