Veeam Vanguard Summit 2019 Prague – Day 1 Recap

Welcome Vanguards!

Nikola Pejkova and Rick Vanover welcome all new and renewed Vanguards here in Prague. As we know from last year, and maybe also from other conferences, there are the famous traffic light signals. In the following days we will see and hear a lot of green, yellow and red stuff, which means that it is free to publish, embargoed (publish it later) or even under NDA (don't even think about publishing it or talking about it).

Rick and Nikola explained some new things about the Vanguard program to us. Veeam would like to have more engagement with the Vanguards. There will be more recap videos like the ones from VeeamON, helping to build up the profile of a Vanguard rather than specifically promoting Veeam. Rick guides us through the agenda. There are two rooms this year, with even more sessions for all kinds of technology tastes and interests. For the official Vanguard Dinner on Tuesday we will have our own Veeam shuttles from the hotel to the venue and back. Awesome!

Rick shares some more program updates and priorities. Such an event, and also the Vanguard program itself, is a great opportunity to get information directly from the source. As a Vanguard, you're also able to give your feedback directly to the responsible people, for example feedback on beta versions or on the program itself. These people can then push it into the right channels to hopefully make it happen.

Read more: Veeam Vanguard Summit 2019 Prague – Day 1 Recap

VeeamON Virtual – The Premier Virtual Conference for Cloud Data Management

This year, Veeam is hosting a virtual conference for Cloud Data Management. Well, not just a virtual conference, but the Premier Virtual Conference for Cloud Data Management. The VeeamON Virtual conference on November 20 this year is fully packed with sessions about Vision & Strategy, and you can learn some Implementation Best Practices. You'll also get vital insights about Cloud-Powered topics like Office 365 backup, Backup as a Service and Disaster Recovery as a Service, as well as valuable information about Architecture & Design of the backup solution.

And I’m happy to be part of the expert lounge during this virtual event. Feel free to stop by, say hello and ask your questions! All you have to do is to visit the VeeamON Virtual website here, register and join the conference on November 20, 2019.

Upgrade VCSA through CLI Installer

My team and I were tasked with a global vSphere upgrade covering all of our ESXi hosts, hyper-converged systems and our vCenter. We took enough time to get the inventory, check all the hosts for compatibility and test the various upgrade paths. The upgrade will be rolled out in multiple steps due to limited personnel resources (we're a small team and currently it's summer holiday season) and also to avoid too much downtime. In this blog post, I'd like to share some personal experiences regarding the upgrade of our vCenter. It didn't work out as we had planned. But in the end, everything worked fine. I'd also like to give a big shoutout and thank you to my team. You guys rock!

Foreword

Before we dive deep into the vCenter upgrade process and what happened, I'd like to explain some steps first so you can better understand our approach and the upgrade process in general.

One of the milestones is (at the time of writing this blog post, already "was") the upgrade of our vCenter. We're using vCenter for our daily tasks like managing virtual workloads, deploying new ESXi hosts, etc. But before we could upgrade our vCenter from 6.5 to 6.7, we had to do some host upgrades first. Our hyper-converged infrastructure was running 24/7 without getting much care, such as firmware upgrades. There was just not enough time for maintenance tasks like this throughout the last few months or maybe years. Maybe some people were also just afraid of touching these systems, I don't know for sure. The firmware was old, but at least the hypervisor was on a 6.0 version and in pretty good shape.

So we’ve scheduled various maintenance windows, planned the hyper-converged upgrades and made sure that we’ve downloaded everything from the manufacturer we need to succeed. The firmware upgrade went well on all hosts. One host had a full SEL log and that caused some error messages. No real issue at all, but some alerts in vCenter on that cluster we had to get rid of.

The firmware upgrade on one of the hyper-converged clusters took about 18 hours. That was somewhat expected, because the firmware was really old and did not support ESXi versions higher than 6.0. But everything went well and we had no issues at all, except the full SEL log, which was then cleared.

After that firmware upgrade, we were able to upgrade the ESXi version on all of the hyper-converged clusters to 6.5. This was needed because of some plugins used to manage these hyper-converged systems. Ok, to let the cat out of the bag, we're using Cisco HyperFlex, and the plugin I'm talking about is the HX plugin. The version for ESXi 6.0 wasn't supported in vCenter 6.7. That's the reason we had to upgrade the HyperFlex systems to ESXi 6.5 first.

As you know for sure, you can't manage ESXi hosts newer than 6.5 in vCenter 6.5. So we had to stop here for the moment, but we were now at least able to upgrade our vCenter. All other hosts had already been on 6.0 since they were installed, so there were no issues upgrading to vCenter 6.7.

Oh, did I already mention that our vCenter doesn't run on-premises but at a cloud provider? No, it's not VMC on AWS, but some other IaaS provider. That didn't make it easier.

But let’s dive into the main topic now, enough of explanation, let’s do the hard work now.

Read more: Upgrade VCSA through CLI Installer

Expand your logical drive to extend a VMFS datastore

Recently, I had to add some hardware to an HPE ProLiant DL380 server. Ok, it wasn't me, because this server runs in a location about 3600 miles away from me, but the engineers on-site completed this task. They added some memory and more disks to the server. My task was to add the newly installed disks to the existing ESXi datastore. This was (and still is) a standalone ESXi server, what we call a black box server. It's a standard HPE ProLiant server with local disks and an SD card as ESXi boot disk, and it is centrally managed in a vCenter. Local IT staff have limited access to vCenter, just to manage their workloads on that specific ESXi black box. There isn't much running on these black boxes, mostly an SCCM distribution point because of limited bandwidth. But anyway, that's not the topic here.

I want to show you which steps I missed on the first attempt and how I managed to fix it.

As there is not much running on this black box server, it was easy to schedule a maintenance window to shut down the workloads and also the ESXi server, so the engineers on-site were able to install the hardware (memory and disks). Through the iLO interface, I started the server and accessed the Smart Storage Administrator, which is part of the Intelligent Provisioning toolkit on servers of Gen9 and later. It was easy to add the unassigned disks to the already existing RAID array. It took some hours to rebuild the array because all data had to be redistributed over all disks, parity included.

After the server was up and running again, I tried to increase the VMFS datastore capacity. It didn't work as expected. I didn't see any device or LUN which I could extend. That made me curious.

Well then, back to the drawing board…

It wasn’t easy this time to schedule a maintenance window, but I’ve asked the responsible person if he could suggest one. In the meantime, I was digging through the internet to find out what’s wrong or what I’ve missed. I’ve found out that just adding the new disks to the existing RAID doesn’t solve that issue alone. I also had to expand the logical drive. That was the key! So ok, could this be done without another downtime? Thankfully yes!

But before we go deeper here, please, always take a backup of your workloads first. Just in case. Better safe than sorry!

It is indeed possible to expand the logical drive on an HPE ProLiant server without downtime. I'm talking here about a rather easy, not very complex task. It's not like I'm going to create new arrays or change the RAID mode. No, just expanding the logical drive.

First, I connected to that ESXi server with SSH to see if the HPE tools were installed. And they were. It's highly recommended to use the custom VMware ESXi ISO image to install your servers when they come from a vendor like HPE or Dell. These images include all the necessary drivers for your hardware, like network or storage controllers, and most of all, they also include some nifty tools.
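If you want to check on your own host whether those tools are present, something like the following should do. This is just a hedged example, and the exact directory and VIB names can differ depending on the image version:

ls /opt/hp/
esxcli software vib list | grep -i ssa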

In my case, I’m using the tool “hpssacli”. This tool is just the command line version of the Smart Storage Administrator (HP SSA CLI => Smart Storage Administrator CLI). Nice, isn’t it? 😉

Take me to the CLI, please!

I’ve needed only a few commands to get the things in order. Let’s go into it!

First, I’ve checked the logical drives on the controller:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld all show status

Mostly, the controller is installed in slot 0. I'm talking here about the P440ar, which is on-board if I'm not wrong, so definitely on slot 0. With "ld all" it will display all logical drives configured on that controller.

Next, as I’ve got now the logical drive ID, I’ve checked the details for that logical drive:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 show

This gave me an overview of that logical drive, and I saw that it was just half of the expected size because disks had been added.

Just to make sure, I’ve checked the controller to see if all disk drives were assigned:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 show config

The output showed me that all installed drives were assigned and that there were no unassigned drives. As this controller only had one logical drive, all disk drives were assigned there.
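If you want to double-check the physical drives as well, the same tool can list their status. A hedged example, again assuming the controller sits in slot 0:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd all show status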

Ok, so then it should be possible to extend the logical drive. And it is, with the following command:

/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 modify size=max forced

This command extends the logical drive so that it uses the whole disk space as defined in the RAID array.

After that, I was able to increase the VMFS datastore capacity through the vSphere web client without problems.
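In case the grown device doesn't show up right away, a rescan of the storage adapters from the ESXi shell may help before you retry the capacity increase. This is a standard esxcli command, shown here just as a hint:

esxcli storage core adapter rescan --all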

Create VAAI supported iSCSI LUNs on a Synology NAS

Today I was working in my homelab. A few days ago I started to rebuild it. Initially, I ran three DELL PowerEdge servers for some time, but they consumed too much power and produced too much heat. My approach was to have as many physical components as possible. Well, it didn't work out as well as planned (more here, but that wasn't all…).

I’m running now only one PowerEdge server, installed 144GB of memory (did some frankensteining with one other PowerEdge server) and installed some SSD drives. I also installed a PERC H700 RAID controller because my white box HPE H240 HBA doesn’t like RAID much, and my HPE P822 RAID Controller stops the server from booting. But let’s go into the topic, VAAI supported iSCSI LUNs on a Synology NAS. Yes, I already wrote about that topic here. But with the current version of DSM (Disk Station Manager), the feature set changed a little. And you don’t need the VAAI plugin (because it’s only for NFS datastores and currently not supported on vSphere 6.7, ohhh myyy…).

This quick guide should help you to create a VAAI enabled iSCSI datastore on your Synology NAS. It's a straightforward guide, and I'm assuming that your Synology box is empty. As my NAS came back from repair today, I didn't care and wiped all disks. So mine is empty now.

But what is VAAI?

Long story short, VAAI stands for "VMware vSphere Storage APIs - Array Integration". Through this API, storage operations, like cloning of a VM, can be offloaded to the storage itself. Not just because it's possible, but because it's faster and causes less unnecessary data traffic between the ESXi host and the datastore.

On a datastore without VAAI / hardware acceleration, the ESXi host initiates the process to clone a VM, but instead of the storage cloning the data blocks itself (for example to another LUN), the ESXi host receives all the data blocks and writes them to the other LUN. On a datastore with VAAI / hardware acceleration enabled, the ESXi host only initiates the process; all data blocks are then cloned by the storage itself. To get all the benefits from that, your storage has to support these features. Check with the hardware vendor whether your storage is VAAI ready or not.

Note: I’m not good at drawing…

Let’s dive into that topic now.

Read more: Create VAAI supported iSCSI LUNs on a Synology NAS