Backup and Restore vCenter Server Appliance

Just a few weeks ago, vSphere 7 saw the light of day. And people went crazy! New ESXi servers with vSphere 7 have sprung up like mushrooms. So many people directly upgraded their homelabs, or maybe even their production systems.

This blog post (I know, the last post was some time ago) will show you how to back up your vCenter Server Appliance with its integrated backup functionality, and also how to restore it in case something goes wrong. Except for two protocols, I went through all the options for backup targets and figured out how to configure them. So there should be at least one way for you to back up your vCenter data to a proper location in your data center.

Why it’s a good idea to back up your vCenter

vCenter is your management central when it comes to virtualization. You manage all your ESXi servers with it, your clusters, maybe your data center networking (with NSX); you’ve got some automation running, your host profiles, storage policies, etc. in place there. Why lose all the stuff you’ve configured over a long period, maybe with much tinkering and trial and error? Backing up vCenter is not that hard: you need a backup target, a user and a password. In vCenter 6.7 you can even schedule the backup, which makes things easier than before, when it wasn’t possible to configure a schedule.
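By the way, the backup doesn’t have to be triggered by hand in the VAMI. The appliance also exposes a REST API for file-based backups. Here is a rough sketch with curl, assuming a VCSA 6.7 reachable as vcsa.lab.local and an FTP target; host names, credentials and paths are placeholders, and the endpoint details may differ between versions:

TOKEN=$(curl -sk -u 'administrator@vsphere.local:VMware1!' -X POST https://vcsa.lab.local/rest/com/vmware/cis/session | python -c 'import sys,json; print(json.load(sys.stdin)["value"])')

curl -sk -X POST https://vcsa.lab.local/rest/appliance/recovery/backup/job -H "vmware-api-session-id: $TOKEN" -H "Content-Type: application/json" -d '{"piece":{"location_type":"FTP","parts":[],"location":"ftp://backuphost/vcsa-backups","location_user":"backupuser","location_password":"backuppass"}}'

The first call opens an API session and extracts the session token, the second one starts the actual backup job against the given target.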

Supported protocols for backup

vCenter supports the following protocols for backup:

  • FTP
  • NFS
  • SMB
  • FTPS
  • SCP
  • HTTP
  • HTTPS

This guide will show you how to configure all of the above protocols except HTTP and HTTPS. Setting those up didn’t make much sense to me, because I doubt such backup targets exist in many companies, and I suspect these two protocols would also be the slowest compared to the other available options. In data centers, no matter if on-premises or cloud, the most commonly used protocols are NFS and SMB. So chances are high that a suitable backup target for vCenter already exists, or that it can easily be created. FTP is also still commonly used, and with FTPS and SCP we’ve got secure options as well.
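For orientation, the backup target is specified as a location together with credentials. Depending on the vCenter version, you either pick the protocol from a dropdown or put it directly in front of the location. A few made-up examples of how such locations can look (server names and paths are placeholders):

nfs://nas01.lab.local/volume1/vcsa-backup
smb://fileserver.lab.local/backups/vcsa
ftps://ftp.lab.local:990/vcsa-backup
scp://backuphost.lab.local:22/home/backup/vcsa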

Backup performance

To be honest, backup performance was not my top priority; I mainly wanted to configure and test all supported protocols except HTTP and HTTPS. It’s clear that performance matters, at least to a certain degree: backup windows might be small, or systems shouldn’t be impacted by heavy load. So before we move on, I’d like to show you how the backups performed during my tests.

I’ve set up a new vCenter Server Appliance for this backup and restore test. It is a tiny deployment with 2 CPUs, 10 GB of memory and the default disk size (thin provisioned). Nothing is configured within vCenter, no hosts, no clusters, nothing except the backup itself. You can see that the amount of data transferred is the same in all tests. Regarding duration, SMB takes first place, followed by FTPS in second and NFS in third. And yes, I’m aware of “but there’s ftp:// and not ftps://”. I did configure FTPS, as you can see later in the screenshots, but when the backup job ran, it was logged as “ftp”. You can spot the difference in the port used for FTPS.


New homelab hardware arrived!

Some weeks and months ago, the gathering started. I did a lot of research, read blog posts and found plenty of helpful stuff. As you can read on my homelab page here, my lab has evolved. It all started with VMware Workstation, then I recycled my old gaming rig, added some real servers and storage, and now, today, I’m announcing the arrival of totally brand-new and shiny homelab hardware!

With this blog post, I’m starting a small series featuring my new homelab. In this very first post, you’ll get the BOM (Bill of Materials), so you know exactly what happened. In the next posts, I’ll show you how I’ve set it all up and what I’m using it for.

Basic idea

Instead of having huge servers heating up the basement, I planned to reduce my own data center footprint as much as possible. Ideally, everything related to my homelab should fit into a small 19-inch rack. A really small rack. This rack will be placed in my home office. I also want to run an all-flash VMware vSAN cluster with three nodes. I don’t want just two hosts and a witness appliance, even if that works and is a fully supported concept for small or branch offices. I want real beef. Each server should have one cache device and at least one SSD for the capacity tier. I went all-in and decided to go with two SSDs for capacity. All servers have to be connected with 10Gig SFP+ for vSAN and vMotion, because I already own a 10Gig SFP+ switch (which hasn’t seen much use until now). And all three servers should run as silently as possible. Sure, I’ve got headphones for gaming. But when the fans are constantly buzzing and making noise, it’s not nice.

To sum this up:

  • Small data center footprint
  • Three node all-flash vSAN cluster
  • 10Gig SFP+ connectivity
  • Small form factor 19-inch rack
  • Silent operations because of home office placement

That’s pretty much it.

What am I going to use it for?

First of all, I love hardware! But I’m not buying hardware just for the sake of buying it. I’ll learn new stuff, because so far I haven’t had much to do with Supermicro except reading about it. I’ll install all the vSphere stuff I currently have running, and maybe some more. All that for learning how things work and for my exam preparations. Yes, I don’t have a VCP yet. I’ve tried it several times but failed miserably. But not the next time, for sure! Maybe I’m also gonna put some “production” stuff onto it, like my Pi-hole (DNS-based ad blocker) or my Ubiquiti controller. We will see.


Upgrade VCSA through CLI Installer

My team and I were tasked with a global vSphere upgrade of all our ESXi hosts, hyper-converged systems and our vCenter. We took enough time to take inventory, check all hosts for compatibility and test the various upgrade paths. The upgrade will be rolled out in multiple steps, due to limited personnel resources (we’re a small team, and currently it’s summer holiday season) and also to avoid too much downtime. In this blog post, I’d like to share some personal experiences regarding the upgrade of our vCenter. It didn’t work as we had planned. But in the end, everything worked out fine. I’d also like to shout out a big thank you to my team. You guys rock!

Foreword

Before we dive deep into the vCenter upgrade process and what happened, I’d like to explain some steps first, so you can better understand our approach and the upgrade process in general.

One of the milestones is (at the time of writing this blog post, already “was”) the upgrade of our vCenter. We’re using vCenter for our daily tasks like managing virtual workloads, deploying new ESXi hosts, etc. But before we could upgrade our vCenter from 6.5 to 6.7, we had to do some host upgrades first. Our hyper-converged infrastructure had been running 24/7 without getting much care, care in the form of firmware upgrades, for example. There was just not enough time for maintenance tasks like this throughout the last few months, or maybe years. Maybe some people were also just afraid of touching these systems, I don’t know for sure. The firmware was old, but at least the hypervisor was on a 6.0 version and in pretty good shape as well.

So we scheduled various maintenance windows, planned the hyper-converged upgrades and made sure we had downloaded everything we needed from the manufacturer to succeed. The firmware upgrade went well on all hosts. One host had a full SEL (System Event Log), which caused some error messages. No real issue at all, but some alerts in vCenter on that cluster that we had to get rid of.

The firmware upgrade on one of the hyper-converged clusters took about 18 hours. That was somewhat expected, because the firmware was really old and did not support ESXi versions higher than 6.0. But everything went well and we had no issues at all, except the full SEL log, which was then cleared.

After that firmware upgrade, we were able to upgrade the ESXi version on all of the hyper-converged clusters to 6.5. This was needed because of some plugins used to manage these hyper-converged systems. OK, to let the cat out of the bag: we’re using Cisco HyperFlex, and the plugin I’m talking about is the HX plugin. Its version for ESXi 6.0 wasn’t supported in vCenter 6.7. That’s the reason we had to upgrade the HyperFlex systems to ESXi 6.5 first.

As you surely know, you can’t manage ESXi hosts newer than 6.5 with vCenter 6.5. So we had to stop here for the moment, but we were now at least able to upgrade our vCenter. All the other hosts had been on 6.0 since they were installed, so there were no issues with upgrading to vCenter 6.7.

Oh, did I already mention that our vCenter doesn’t run on-premises, but at a cloud provider? No, it’s not VMC on AWS, but some other IaaS provider. That didn’t make it easier.

But let’s dive into the main topic now. Enough explanation, let’s do the hard work.
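For reference, the CLI installer itself is driven by a JSON template and started from the mounted VCSA installer ISO. Here is a minimal sketch of what the invocation roughly looks like, assuming the Linux variant of the installer and a prepared template at /tmp/vcsa-upgrade.json (paths are placeholders, and the exact flags can vary between installer versions):

./vcsa-cli-installer/lin64/vcsa-deploy upgrade --accept-eula --acknowledge-ceip --no-ssl-certificate-verification /tmp/vcsa-upgrade.json

It’s also worth running the built-in prechecks first, which, if I remember correctly, can be done by adding the --precheck-only flag before committing to the actual upgrade.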


Expand your logical drive to extend a VMFS datastore

Recently, I had to add some hardware to an HPE ProLiant DL380 server. OK, it wasn’t me, because this server runs in a location about 3,600 miles away from me, but the engineers on-site completed this task. They added some memory and more disks to the server. My task was to add the newly installed disks to the existing ESXi datastore. This was (and still is) a standalone ESXi server, what we call a black box server. It’s a standard HPE ProLiant server with local disks and an SD card as the ESXi boot disk, and it is centrally managed in a vCenter. Local IT staff have limited access to vCenter, just enough to manage their workloads on that specific ESXi black box. There isn’t much running on these black boxes, mostly an SCCM distribution point because of the lack of bandwidth. But anyway, that’s not the topic here.

I want to show you which step I missed in the first attempt and how I managed to fix it.

As there is not much running on this black box server, it was easy to schedule a maintenance window to shut down the workloads as well as the ESXi server, so the engineers on-site could install the hardware (memory and disks). Through the iLO interface, I started the server and accessed the Smart Storage Administrator, which is part of the Intelligent Provisioning toolkit on Gen9 and later servers. It was easy to add the unassigned disks to the already existing RAID array. The rebuild took some hours, because all data had to be redistributed over all disks, parity and everything included.

After the server was up and running again, I tried to increase the VMFS datastore capacity. It didn’t work as expected: I didn’t see any device or LUN that I could extend. That made me curious.

Well then, back to the drawing board…

It wasn’t easy to schedule a maintenance window this time, but I asked the responsible person if he could suggest one. In the meantime, I was digging through the internet to find out what was wrong, or what I had missed. I found out that just adding the new disks to the existing RAID array doesn’t solve the issue on its own. I also had to expand the logical drive. That was the key! So, could this be done without another downtime? Thankfully, yes!

But before we go deeper here, please always take a backup of your workloads first. Just in case. Better safe than sorry!

It is indeed possible to expand the logical drive on an HPE ProLiant server without downtime. Mind you, I’m talking about a rather simple, not very complex task here. It’s not like I’m creating new arrays or changing the RAID mode. No, just expanding the logical drive.

First, I connected to the ESXi server with SSH to check if the HPE tools were installed. And they were. It’s highly recommended to use the custom VMware ESXi ISO images to install your servers when they come from a vendor like HPE or Dell. These images include all the necessary drivers for your hardware, like network and storage controllers, and most of all, they also include some nifty tools.
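If you want to check quickly whether the HPE bits are on the host, listing the installed VIBs over SSH should reveal them. A rough one-liner (the exact package names differ between image versions):

esxcli software vib list | grep -i hp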

In my case, I’m using the tool “hpssacli”. This tool is just the command line version of the Smart Storage Administrator (HP SSA CLI => Smart Storage Administrator CLI). Nice, isn’t it? 😉

Take me to the CLI, please!

I needed only a few commands to get things in order. Let’s get into it!

First, I’ve checked the logical drives on the controller:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld all show status

In most cases, the controller is installed in slot 0. I’m talking about the P440ar here, which is onboard if I’m not mistaken, so definitely slot 0. With “ld all”, the command displays all logical drives configured on that controller.

Next, as I now had the logical drive ID, I checked the details of that logical drive:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 show

This gave me an overview of that logical drive, and I saw that it was just half of the expected size: the new disks had added raw capacity, but the logical drive hadn’t grown with it.

Just to make sure, I checked the controller to see if all disk drives were assigned:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 show config

The output showed me that all installed drives were assigned and that there were no unassigned drives. As this controller only had one logical drive, all disk drives were assigned to it.
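If you want to see the state of the individual physical drives as well, the same tool can list them. This variant should work on these controllers:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd all show status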

OK, so it should now be possible to extend the logical drive. And it is, with the following command:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 modify size=max forced

This command extends the logical drive so that it uses the whole disk space available in the RAID array.
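The expansion itself runs in the background on the controller. If I remember correctly, you can watch the progress by re-running the detail command from above, which should report a transformation status with a percentage while the logical drive is being expanded:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 show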

After that, I was able to increase the VMFS datastore capacity through the vSphere Web Client without any problems.
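In case the new capacity doesn’t show up in the web client right away, a rescan of the storage adapters may help. From the same SSH session:

esxcli storage core adapter rescan --all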

Enable Flash Player in Google Chrome 69

Some weeks ago, Google released the newest version of their Chrome browser, version 69. I thought, yeah, get it and update. I really like Google Chrome because it supports all the websites I visit often (or at least it did, but more on that later) and it’s fast. You can also customize it with plugins, like mouse gestures, to fit your needs.

What I didn’t know is that Google changed the Flash Player support as I knew it from older versions. You can no longer add websites to the allow or block list in the Flash settings within the browser. Or at least you can’t add websites directly to that list in the settings; there are some more clicks to do. The end of Flash Player was officially announced by Adobe: by 2020, Flash Player won’t exist anymore and won’t be supported by Adobe or by the most commonly used browsers. But there’s a “but”. You can still manually add specific websites to the allow or block list in Google Chrome, just not in the way you might know.

But why the heck should I use Flash anyway? All my favorite websites are already HTML5 compatible, and everything works without that crappy Flash plugin! But wait! Do you use the VMware vCenter browser client? Probably the Flex Client, because you still need it for things like vSAN, Update Manager, or third-party plugins of different software and hardware vendors within vCenter? Then you’ll have the same issues as I had. The vCenter Flex Client (aka Flash client) obviously won’t work anymore without Flash.

Yes, I know, you don’t need the Flex Client for vSAN or Update Manager anymore, because in vCenter 6.7 the HTML5 client finally covers them. But what about the third-party plugins? There is a lot of stuff out there you probably need; I don’t know your infrastructure, I can only compare with mine. But I can tell you, enabling Flash Player in Google Chrome, even in the most recent version 69, is easier than you think. It’s just a few steps and clicks, no rocket science!
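To give you an idea right away: in Chrome 69, the per-site switch is hidden behind the site information popup. Roughly, from memory (the exact wording may differ between Chrome builds):

  • Open chrome://settings/content/flash and make sure Flash is set to “Ask first”.
  • Browse to your vCenter Flex Client, click the padlock (or “Not secure”) icon in the address bar and open “Site settings”.
  • Set “Flash” to “Allow”, then reload the vCenter page.

Keep in mind that Chrome may reset these per-site Flash permissions after a restart, so you might have to repeat this from time to time.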
