vCenter 6.7 Update 2: Error in creating a backup schedule

One of the improvements in vCenter 6.7 Update 2 is Samba (SMB) protocol support for the built-in File-Based Backup and Restore. Excited about the news, I decided to test this functionality and back up data to a Windows share.

I filled in the backup schedule parameters in the vCenter Server Appliance Management Interface (VAMI) and pressed the Create button, at which point the following error message appeared: Error in method invocation module 'util.Messages' has no attribute 'ScheduleLocationDoesNotExist'.

Puzzled by this message and not knowing which log file to inspect, I ran the following command in a local console session on the vCenter Server Appliance (VCSA):

grep -i 'ScheduleLocationDoesNotExist' $(find /var/log/vmware/ -type f -name '*.log')

The search results led me to /var/log/vmware/applmgmt/applmgmt.log where I found another clue:

2019-04-30T01:25:24.111 [2476]ERROR:vmware.appliance.backup_restore.schedule_impl:Failed to mount the cifs share // at /storage/remote/backup/cifs/; Err: rc=32, stdOut:, stdErr: mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

At first, after some reading, I thought it was related to the SMB protocol version or a wrong security type on the server. So I decided to look for any security events on the file server.

In the Windows Event Log, I found the corresponding security events for the share access.

After double-checking the NTFS and share permissions for the network share, I was confident that the user had permissions to access it and write data into it.

Having run out of ideas, I turned to the official documentation and some blog posts to see if something was missing. What struck me was that the backup server credentials in the Create Backup Schedule wizard contained no reference to the domain name, neither in UPN format nor in the form of a sAMAccountName.

It was easy for me to test whether skipping the domain name would make any difference, and it did! The backup job worked like a charm and completed successfully.

A Home Lab Basics: Part 1 – An automated ESXi installation

Over time, almost every virtualisation specialist asks themselves a simple question: ‘Do I need a home lab?’

In recent years, this topic has become more and more popular. Many top bloggers write at least once about their experience with building a home lab; some contribute more to the community providing scripted installs, OVA-templates for the nested virtualisation, and even drivers for unsupported devices.

There is plenty of choice in terms of hardware platforms and networking devices to build your lab (including Raspberry Pi), and the sky’s the limit.

My preference is an all-in-one solution, with the rest done in a nested environment. It should be relatively compact and quiet, with a minimum of wired connections to the router – an ideal option for someone living in an apartment.

As a result of my research, I bought a Lenovo P-series ThinkStation with two Intel Xeon CPUs and 80 GB of RAM a few years ago. Instead of using magnetic drives, I fitted NVMe M.2 SSDs (used for the VMFS and vSAN datastores) and one USB flash drive for the ESXi boot partitions. The workstation has two onboard 1 Gbps network cards, and I added a quad-port 1 Gbps PCIe card to test different configurations (bonding, pass-through, etc.). All NICs are connected to the router, which provides access to the home network and to the Internet.

This platform is sufficient for setting up the vSphere, vSAN, and vRealize Automation labs.

In a series of articles, I am planning to show how to automate different parts of those labs. It all starts here with a scripted ESXi installation using a bootable USB flash drive.

To create the bootable media, we need to complete the following steps:

  1. Format a USB flash drive to boot the ESXi Installer.
  2. Copy files from the ESXi ISO image to the USB flash drive.
  3. Modify the configuration file SYSLINUX.CFG.
  4. Modify the configuration file BOOT.CFG.
  5. Create an answer file KS.CFG.

In the paragraphs below, I am going to discuss those steps in detail.

Step #1 – Format a USB flash drive

Depending on the operating system, the process can vary. The official VMware documentation details this task for Linux. To do it on a Mac, I used the steps described in the blog posts here and here.

Firstly, we need to identify the USB disk using the diskutil list command. In my case, it is /dev/disk2.

Then, we erase that disk using the diskutil eraseDisk command:

diskutil eraseDisk FAT32 ESXIBOOT MBRFormat /dev/disk2
Started erase on disk2
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk2s1 as MS-DOS (FAT32) with name ESXIBOOT
512 bytes per physical sector
/dev/rdisk2s1: 7846912 sectors in 980864 FAT32 clusters (4096 bytes/cluster)
bps=512 spc=8 res=32 nft=2 mid=0xf8 spt=32 hds=255 hid=2048 drv=0x80 bsec=7862272 bspf=7664 rdcl=2 infs=1 bkbs=6
Mounting disk
Finished erase on disk2

It is important to choose the MBR format for the disk (the MBRFormat option). Otherwise, when you boot from this USB drive, the ESXi installer won't be able to copy data from that partition and will generate the following error message: ‘exception.HandledError: Error (see log for more info): cannot find kickstart file on usb with path — /KS.CFG.’

As a result, you will have one MS-DOS FAT32 partition /dev/disk2s1. The next step is to mark it as active and bootable:

diskutil unmount /dev/disk2s1
Volume ESXIBOOT on disk2s1 unmounted

sudo fdisk -e /dev/disk2
fdisk: 1> flag 1
Partition 2 marked active.
fdisk:*1> write
Writing MBR at offset 0.
fdisk: 1> quit

Before copying any data onto the USB drive, we need to remount /dev/disk2s1:

diskutil unmount /dev/disk2s1
Volume ESXIBOOT on disk2s1 unmounted
diskutil mount /dev/disk2s1
Volume ESXIBOOT on /dev/disk2s1 mounted

Step #2 – Copy the ESXi ISO image files to the USB flash drive

There are two simple steps – mount the ISO file and copy data to the USB drive. In the example below, I used the VMware ESXi 6.7 Update 2 image.

hdiutil mount VMware-VMvisor-Installer-6.7.0.update02-13006603.x86_64.iso

cp -R /Volumes/ESXI-6.7.0-20190402001-STANDARD/ /Volumes/ESXIBOOT/

Step #3 – Modify SYSLINUX.CFG

We need to rename ISOLINUX.CFG to SYSLINUX.CFG:

mv /Volumes/ESXIBOOT/ISOLINUX.CFG /Volumes/ESXIBOOT/SYSLINUX.CFG

Then we define the location of BOOT.CFG (here ‘-p 1’ refers to /dev/disk2s1):

sed -e '/-c boot.cfg/s/$/ -p 1/' -i _BACK /Volumes/ESXIBOOT/SYSLINUX.CFG

Step #4 – Modify BOOT.CFG

Now we need to add a path to the answer file (ks=usb:/KS.CFG) into the boot loader (BOOT.CFG).

However, there are two boot loaders available with the image – one for the BIOS boot, and another one for EFI.

find /Volumes/ESXIBOOT -type f -name 'BOOT.CFG'

So it makes sense to edit both of them to eliminate any possible issues.

sed -e 's+cdromBoot+ks=usb:/KS.CFG+g' -i _BACK $(find /Volumes/ESXIBOOT -type f -name 'BOOT.CFG')

In the example above, I created a backup of the original BOOT.CFG files and replaced ‘cdromBoot’ with ‘ks=usb:/KS.CFG’ inside them.
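To sanity-check the substitution without touching the real files, the same sed expression can be exercised against a scratch copy first (a sketch; the kernelopt line is only illustrative of the relevant BOOT.CFG content):

```shell
# Try the substitution on a throwaway copy before editing the real BOOT.CFG files
TMP=$(mktemp -d)
printf 'kernelopt=cdromBoot runweasel\n' > "$TMP/BOOT.CFG"
# Same expression as above; printing to stdout leaves the scratch file untouched
sed 's+cdromBoot+ks=usb:/KS.CFG+g' "$TMP/BOOT.CFG"
```

If the output shows ks=usb:/KS.CFG in place of cdromBoot, the expression is safe to run with -i against the files on the USB drive.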

Step #5 – Create KS.CFG

Finally, we can work on the answer file that will be used to automate the ESXi host installation.

In a basic scenario, the KS.CFG file should include the following:

  • Accept the VMware license agreement,
  • Set the root password,
  • Choose the installation path,
  • Configure the network settings,
  • Reboot the host after the installation is completed.

A best practice is to store the root password as a hash rather than in plain text. The hash can be generated using OpenSSL:

openssl passwd -1
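When run interactively, openssl passwd -1 prompts for the password and prints an MD5-crypt string that can be pasted into the rootpw --iscrypted line. The password can also be supplied as an argument (a sketch; ‘VMware1!’ is just a placeholder, and note that the password then lands in your shell history):

```shell
# Generate an MD5-crypt hash suitable for the kickstart 'rootpw --iscrypted' option
HASH=$(openssl passwd -1 'VMware1!')
# The result starts with the $1$ marker, followed by the salt and the hash
echo "$HASH"
```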

To identify the installation path, I normally boot ESXi with a dummy installation script and then use the local console to search for the device names in /vmfs/devices/disks. The MPX format is the preferred option for the disk device name.

A sample installation script is shown below.

# Accept VMware EULA
vmaccepteula
# Set the root password (default: VMware!)
rootpw --iscrypted $1$DK/hzmaO$ZFEGEeAv3rNc8gqvhY2it1
# Clear VMFS partitions (if exist) on the USB drive
clearpart --drives=/vmfs/devices/disks/mpx.vmhbaXX:C0:T0:L0
# Install ESXi on the USB drive
install --drive=/vmfs/devices/disks/mpx.vmhbaXX:C0:T0:L0
# Set the static network settings
network --bootproto=static --device=a0:b1:c2:d3:e4:f5 --ip= --gateway= --netmask= --nameserver= --hostname=esxi-host.domain.local --addvmportgroup=0
# Reboot host
reboot

In the next post, I will show how to complete the initial server configuration using PowerCLI.

A tip when using the vCenter Server Converge Tool

As you might know, VMware is dropping support for the external Platform Services Controller (PSC) deployment model with the next major release of vSphere.

To make a smooth transition, you have to use the command-line interface of the vCenter Server Converge Tool to migrate to an embedded PSC. This functionality is available starting from vCenter Server 6.7 Update 1, and also in vCenter Server 6.5 Update 2d and onward.

With the upcoming release of vSphere 6.7 Update 2, there will be an option to complete the whole migration using the vSphere Client – super easy!

Meanwhile, the process of moving from an external PSC deployment to an embedded one using the CLI consists of two manual steps – converge and decommission. Detailed instructions on how to prepare for and execute each of those steps are documented in David Stamen's post ‘Understanding the vCenter Server Converge Tool’.

What I found tricky was running the converge step when the external PSC had previously been joined to a child domain in Active Directory. In this case, the vCenter Server Converge Tool precheck run with the default parameters generates the following error message in vcsa-converge-cli.log:

2019-04-06 03:08:15,979 – vCSACliConvergeLogger – ERROR – AD Identity store present on the
2019-04-06 03:08:15,979 – vCSACliConvergeLogger – INFO – ================ [FAILED] Task: PrecheckSameDomainTask: Running PrecheckSameDomainTask execution failed at 03:08:15 ================
2019-04-06 03:08:15,980 – vCSACliConvergeLogger – DEBUG – Task ‘PrecheckSameDomainTask: Running PrecheckSameDomainTask’ execution failed because [ERROR: Template AD info not providded.], possible resolution is [Refer to the log for details]
2019-04-06 03:08:15,980 – vCSACliConvergeLogger – INFO – =============================================================
2019-04-06 03:08:16,104 – vCSACliConvergeLogger – ERROR – Error occurred. See logs for details.
2019-04-06 03:08:16,105 – vCSACliConvergeLogger – DEBUG – Error message: com.vmware.vcsa.installer.converge.prechecksamedomain: ERROR: Template AD info not providded.

In this example, the converge template refers to the root domain, whereas the computer object for the PSC is in the child domain.

To work around this issue, I had to use the --skip-domain-handling flag to skip the AD domain-related handling in both the precheck and the actual converge.

When doing this, the vCenter Server Appliance should be joined to the correct AD domain manually after the converge succeeds and before the external PSC is decommissioned.

vSphere 6.5: Additional considerations when migrating to VMFS-6 – Part 2

In Part 1 of this series, I wrote about the most common issues that might prevent a successful migration to VMFS-6. There is another one to cover.

For ESXi hosts that boot from flash storage or from memory, a diagnostic core dump file can also be placed on a shared datastore. You won't be able to unmount this datastore without deleting the core dump first.

VMware recommends using the esxcli utility to view and edit the core dump settings. This can also be automated via PowerCLI.

To check if the core dump file exists and is active, please use the following code:

# File name: Get-VMHostCoredumpLocation.ps1
# Description: This script checks the core dump location for ESXi hosts.
# 11/10/2018 - Version 1.0
# - Initial Release
# Author: Roman Dronov (c)

# Get information about the core dump location for ESXi hosts
Write-Host "`nCore Dump Settings:`r" -ForegroundColor Green
ForEach ($vmhost in $(Get-VMHost | Sort-Object Name)) {
    $esxcli2 = Get-EsxCli -VMHost $vmhost -V2
    $esxcli2.system.coredump.file.get.Invoke() | Select-Object @{N='Host Name';E={$vmhost}},@{N='Active Core Dump File';E={$_.Active}}
    Clear-Variable esxcli2
}

To delete an old configuration that points to the VMFS-5 datastore, the following script can help:

# File name: Remove-VMHostCoredumpLocation.ps1
# Description: This script removes a core dump file for a particular ESXi host.
# 11/03/2019 - Version 1.0
# - Initial release
# Author: Roman Dronov (c)

# Define common functions
function ex {exit}

# Get the host name and check it is valid
$vmhosts = Get-VMHost | ? {$_.ConnectionState -eq "Connected" -or $_.ConnectionState -eq "Maintenance"} | ForEach-Object {$_.Name.Split('.')[0]}
$vmhost = (Read-Host -Prompt "`n Please type in the ESXi host name").Split('.')[0]
While ($vmhosts.Contains("$vmhost") -ne "True") {
    Write-Host "`n Checking the host exists..." -NoNewline
    Write-Host " The host is not reachable." -ForegroundColor Yellow
    $vmhost = Read-Host -Prompt "`n Please type in the host name correctly"
}
$vmhost = $vmhost + "*"

# Get the system configuration
$esxcli2 = Get-EsxCli -VMHost $vmhost -V2

# Activate the current coredump (this is to identify it properly later in this script)
Write-Host "`n Searching for a coredump file and trying to activate it..." -NoNewline
$arguments = $esxcli2.system.coredump.file.set.CreateArgs()
$arguments.Item('enable') = $true
$arguments.Item('smart') = $true
Try {
    $activation = $esxcli2.system.coredump.file.set.Invoke($arguments)
}
Catch [Exception]{
    Write-Host " File doesn't exist!" -ForegroundColor Yellow
    ex
}

# Get the current coredump configuration
$dumpConfigured = $esxcli2.system.coredump.file.get.Invoke().Configured

# Prompt for the coredump removal
If ($dumpConfigured -ne ''){
    Write-Host " File exists." -ForegroundColor Green
    Write-Host "`n Current configuration: $dumpConfigured"
    $choice = $null
    While ("Yes","No" -notcontains $choice) {
        $choice = Read-Host -Prompt "`n Would you like to remove this file? (Yes/No)"
        Switch ($choice){
            "Yes" {
                # Remove the coredump file
                Write-Host " Removing the old coredump file..." -NoNewline
                $arguments = $esxcli2.system.coredump.file.remove.CreateArgs()
                $arguments.Item('force') = $true
                $arguments.Item('file') = "$dumpConfigured"
                $remove = $esxcli2.system.coredump.file.remove.Invoke($arguments)
                Write-Host " Done!" -ForegroundColor Green
            }
            "No" {
                # Exit this script
                Write-Host "`n Exiting..."
                ex
            }
        }
    }
}

With this change made, you will be able to continue migrating to VMFS-6 without any issues.

If you have any suggestions or concerns, feel free to share them in the comments below.

‘Response with status: 401 Unauthorized for URL’ while performing Update Manager tasks on Firefox in vSphere 6.5

Updating to vCenter Server 6.5 Update 2d has brought long-awaited support for the vSphere Update Manager (VUM) functionality to vSphere Client.

However, you should be aware of an incompatibility between the VUM plug-in in the vSphere Client and Mozilla Firefox.

Update Manager gives a ‘Response with status: 401 Unauthorized for URL’ error message at random when used with recent versions of this web browser (tested with Firefox 63.x and 64.x):

Searching on the VMware website led me to KB 59696, which describes a similar issue in vSphere 6.7, and the workaround provided there worked for me.

First, you need to clear the browser cache.

Then, stop Firefox from caching pages by modifying the browser's default configuration.

According to VMware, they are working to resolve this issue in a future release.

IMPORTANT: Removing a virtual machine folder from inventory deletes the VMs within it from disk when using the vSphere Client

VMware has recently released a new KB 65207 warning about this issue.

Normally, all powered-off VMs would be unregistered from vCenter when you remove a parent folder that includes those objects using the vSphere Web Client. The same is expected in the vSphere Client.

vCenter logs similar events whether the folder object is removed in the vSphere Web Client or the vSphere Client.

In my case, vCenter generated two events – one for deleting the folder, and another one for removing TEST-VM-02 from the inventory.

The only difference is that vCenter deleted TEST-VM-02 from the corresponding datastore when removing TEST-Folder from the inventory in the vSphere Client. Moreover, no events are generated during the delete operation!

This issue affects both VMware vCenter Server 6.5.x and VMware vCenter Server 6.7.x.

Workaround (provided by the vendor): use the vSphere Web Client (Flex) to unregister all virtual machines in a VM folder, or move the VMs out of that folder before removing it.

22/01/2019 – Update 1: This issue has been resolved in VMware vCenter Server 6.7 Update 1b. Please see the release notes for more details.

PowerCLI: Housekeeping after upgrading to the latest version of PowerCLI

With a new version of VMware PowerCLI 11.1.0 released, it is time to plan for an upgrade.

Even if it is simply a matter of one command in PowerShell, you need to remember that the Update-Module cmdlet does not remove the old modules from the operating system (OS). So over time, it might look a bit messy:

Following a few threads about this issue (here and here), I wrote a simple script that removes the old versions of VMware.PowerCLI and its dependencies from a Windows OS.

# File name: Remove-OldPowerCLI.ps1
# Description: This script removes the old versions of VMware.PowerCLI and its dependencies
# 21/12/2018 - Version 1.0
# - Initial Release (based on
# Author: Roman Dronov (c)

$modules = @((Get-Module -ListAvailable | ? {$_.Name -like "VMware*"}).Name | Get-Unique)
foreach ($module in $modules){
    $latest = Get-InstalledModule -Name $module
    Get-InstalledModule -Name $module -AllVersions | ? {$_.Version -ne $latest.Version} | Uninstall-Module -Force -Verbose
}

A few minutes later, it looks much cleaner with only the latest version of VMware.PowerCLI remaining:

Hope it will be useful for you as well.

vCSA 6.x: WinSCP fails with the error ‘Received too large SFTP packet’

Back to basics… When you try connecting to vCenter Server Virtual Appliance 6.x (vCSA) using WinSCP, the error message ‘Received too large (1433299822 B) SFTP packet‘ might appear.


This is due to the configuration of the vCSA, where the default shell for the root account is set to the Appliance Shell.

To fix this issue, VMware recommends switching the vCSA 6.x to the Bash Shell. This can be done in the SSH session with the following command:

chsh -s /bin/bash root

Note: You need to log out from the Appliance Shell and log in back again for the changes to take effect.
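Once back in the Bash session, you can confirm the change took effect by checking the shell recorded for the root account in the passwd database (a sketch; getent is available on the appliance's Linux-based OS):

```shell
# Show the login shell recorded for root; expect /bin/bash after running chsh
getent passwd root | cut -d: -f7
```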

[URGENT] vSAN 6.6.1: Potential data loss due to resynchronisation mixed with object expansion

Last week VMware released an urgent hotfix to remediate potential data loss in vSAN 6.6.1 due to resynchronisation mixed with object expansion.

This is a known issue affecting versions of ESXi 6.5 earlier than Express Patch 9. The vendor states that a sequence of the following operations might cause it:

  1. vSAN initiates resynchronisation to maintain data availability.
  2. You expand a virtual machine disk (VMDK).
  3. vSAN initiates another resync after the VMDK expansion.

Detailed information about this problem is available in KB 60299.

If you are a vSAN customer, additional considerations are required before applying this hotfix:

  • If hosts have already been upgraded to ESXi650-201810001, you can proceed with this upgrade,
  • If hosts have not been upgraded to ESXi650-201810001, and if an expansion of a VMDK is likely, the in-place expansion should be disabled on all of them by setting the VSAN.ClomEnableInplaceExpansion advanced configuration option to ‘0‘.

The VSAN.ClomEnableInplaceExpansion advanced configuration option is not available in the vSphere Client. I use the following one-liner scripts to check and change its value via PowerCLI:

# To check the current status
Get-VMHost | Get-AdvancedSetting -Name "VSAN.ClomEnableInplaceExpansion" | select Entity, Name, Value | Format-Table -AutoSize

# To disable the in-place expansion
Get-VMHost | Get-AdvancedSetting -Name "VSAN.ClomEnableInplaceExpansion" | ? {$_.Value -eq "1"} | Set-AdvancedSetting -Value "0"

Note: No reboot is required after the change.

After the hosts have been upgraded to ESXi650-201810001 or ESXi650-201811002, you can set VSAN.ClomEnableInplaceExpansion back to ‘1’ to re-enable the in-place expansion.