vSphere 6.5: Switching to Native Drivers in ESXi 6.5

The Native Device Driver architecture is nothing new. Since its introduction more than five years ago, VMware has been encouraging its hardware ecosystem partners to develop native drivers. The list of supported hardware grows with every major release of ESXi, and the company aims to deprecate the vmkLinux APIs and the associated driver ecosystem completely in future releases of vSphere.

The benefits of using native drivers are as follows:

  • It removes the complexity of developing and maintaining Linux-derived drivers,
  • It improves system performance,
  • It frees the hypervisor from the functional limitations of Linux-derived drivers,
  • It increases the stability and reliability of the hypervisor, as native drivers are designed specifically for VMware ESXi.

That said, one of the steps when upgrading to a new version of vSphere is to check whether the hardware supports native drivers. By default, if ESXi identifies a native driver for a device, it will be loaded instead of the Linux-derived driver. However, this is not always the case, and you need to check whether native drivers are in use after the system upgrade.

By following the steps in KB 1031534 and KB 1034674, you can pinpoint PCI devices and the corresponding driver loaded for each of them:

  • To identify a storage HBA (such as a fibre card or RAID controller), run this command:

# esxcfg-scsidevs -a
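For reference, the output might look like the example below (device names, IDs and descriptions are illustrative, not taken from a real host). The second column shows the driver that claimed each HBA, so a native driver such as lsi_mr3 or vmw_ahci would appear there, whereas a Linux-derived driver would show its legacy name (e.g. megaraid_sas):

```
# esxcfg-scsidevs -a
vmhba0  lsi_mr3   link-n/a  sas.5000000000000000   (0000:02:00.0) Avago (LSI) MegaRAID SAS Invader Controller
vmhba1  vmw_ahci  link-n/a  sata.vmhba1            (0000:00:1f.2) Intel Corporation Wellsburg AHCI Controller
```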

  • To identify a network card, run this command:

# esxcfg-nics -l
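As an illustration (MAC addresses and hardware details below are made up), a NIC claimed by a native driver shows the native module name in the Driver column, for example ntg3 instead of the Linux-derived tg3:

```
# esxcfg-nics -l
Name    PCI           Driver  Link  Speed     Duplex  MAC Address        MTU   Description
vmnic0  0000:01:00.0  ntg3    Up    1000Mbps  Full    00:50:56:aa:bb:01  1500  Broadcom NetXtreme BCM5720 Gigabit Ethernet
```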

  • To list device state and note the hardware IDs, run this command:

# vmkchdev -l
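The hardware IDs in this output are the PCI vendor, device, subvendor and subdevice IDs (VID:DID SVID:SSID), which you will need later when searching the VMware Hardware Compatibility List. An illustrative line might look like this (the IDs are examples only):

```
# vmkchdev -l | grep vmnic0
0000:01:00.0 14e4:165f 103c:22be vmkernel vmnic0
```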

The /etc/vmware/default.map.d/ folder on the ESXi host contains a full list of map files referring to the native drivers available for your system.
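For example (the file names will differ depending on the drivers shipped with your build), listing the folder shows one map file per native driver, each mapping PCI IDs to the driver that should claim them:

```
# ls /etc/vmware/default.map.d/
ahci.map  lsi_mr3.map  nhpsa.map  ntg3.map  nvme.map  qfle3.map  ...
```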


To quickly identify the driver version, you can run this command:

# esxcli software vib list | grep <native_driver_name>
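For example, for the ntg3 driver (the version string and date below are illustrative):

```
# esxcli software vib list | grep ntg3
ntg3   4.1.3.0-1vmw.650.1.26.5969303   VMW   VMwareCertified   2018-07-19
```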

In addition, information about installed vSphere Installation Bundles (VIBs) in vSphere 6.5 can be found via the web client or a PowerCLI session:

  • To view all installed VIBs in vSphere Client (HTML5), open the Configure > System > Packages tab in the host settings.
  • To view all installed VIBs in VMware Host Client, open the Manage > Packages tab in the host settings.
  • To list all installed VIBs in PowerCLI, run this command:

# (Get-VMHost -Name '<host_name>' | Get-EsxCli).software.vib.list() | select Name,Vendor,Version | sort Name

Comparing the findings above with the information in the IO Devices section of the VMware Hardware Compatibility List, you will be able to find out whether native drivers are available for your devices, as well as the recommended combination of driver and firmware tested and supported by VMware.

It is worth reading the release notes for the corresponding drivers and searching for any references to them on VMware's and third-party vendors' websites, in case there are known issues or limitations that might affect how the device functions.

If everything looks good, it is time to enable the native driver by following the steps in KB 2147565:

# esxcli system module set --enabled=true --module=<native_driver_name>
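Before rebooting, you can confirm the new state with esxcli system module list. Prior to the reboot the module will typically show as enabled but not yet loaded; the columns are Name, Is Loaded and Is Enabled, though the exact layout may vary between builds (output illustrative):

```
# esxcli system module list | grep qfle3
qfle3   false   true
```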

This change requires a host reboot and thorough testing afterwards. The following commands can be quite helpful when troubleshooting native drivers:

  • To list the module parameters supported by the driver, run this command:

# esxcfg-module -i <native_driver_name>

  • To get the driver info, run this command:

# esxcli network nic get -n <vmnic_name>

  • To get uplink stats, run this command:

# esxcli network nic stats -n <vmnic_name>
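For instance, the Driver Info section of the NIC details confirms which driver and firmware the uplink is running (the values shown are illustrative and the output is trimmed):

```
# esxcli network nic get -n vmnic0
   ...
   Driver Info:
         Driver: ntg3
         Firmware Version: 5719-v1.46
         Version: 4.1.3.0
   ...
```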

31/08/2018 – Update 1: After some feedback, I have decided to list the currently known issues with the native drivers. They are as follows:

  • The Mellanox ConnectX-4/ConnectX-5 native ESXi driver might exhibit performance degradation when its Default Queue Receive Side Scaling (DRSS) feature is turned on (Reference: vSphere 6.7 Release Notes),
  • Native software FCoE adapters configured on an ESXi host might disappear when the host is rebooted (Reference: vSphere 6.7 Release Notes),
  • An HP host with the QFLE3 driver experienced a PSOD or got stuck at “Shutting down device drivers…” during shutdown or restart (Reference: KB 55088),
  • ESXi 6.5 Storage Performance Issues and Fix (Reference: Anthony Spiteri’s blog).

18/02/2019 – Update 2: There are other VMware articles that owners of Broadcom NICs should pay attention to:

3 thoughts on “vSphere 6.5: Switching to Native Drivers in ESXi 6.5”

  1. I ran across your article while investigating the impact of an HP Notice (https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00044039en_us&docLocale=en_US) on our environment, after experiencing a PSOD related to having the non-native bnx2x driver on one of our 6.5 hosts. I had also been pointed to VMware KB 2147565 to fix the issue. But with a couple of thousand hosts to check, I went looking for an automated way to do it, and your use of PowerCLI to query the VIBs installed on a single host gave me a good start. After confirming I was dealing with the affected versions, I took it a couple of steps further to determine whether the bnx or the native drivers were loaded, using this little bit of code:

    Get-VMHost | Sort @{E={$_.Uid.Split('@:')[1]}}, Parent, Name | %{
    $vmhost = $_
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.system.module.list.Invoke() | where {$_.Name -like "qfl*" -or $_.Name -like "bnx*"} | Select @{N='vCenter';E={$vmhost.Uid.Split('@:')[1]}}, @{N='Cluster';E={$vmhost.Parent}}, @{N='Host';E={$vmhost.Name.Split('.')[0]}}, @{N='Version';E={$vmhost.Version}}, @{N='Manufacturer';E={$vmhost.Manufacturer}}, @{N='Model';E={$vmhost.Model}}, Name, IsEnabled, IsLoaded
    } | Ft -AutoSize

    Since I found it so useful, I thought I would share it with others who might run across your blog article.

    I was then able to tweak the base above and build an automated method that actually activated the native drivers on all of the hosts in each cluster. Using something like this, I was able to test it in the lab:

    Get-Cluster "Cluster Name" | Get-VMHost | Sort Name | %{
    $vmhost = $_
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    if (($esxcli.system.module.list.Invoke() | where {$_.Name -like "qfle3"} | select -ExpandProperty IsEnabled) -eq 'false') {
        $esxcli.system.module.set.Invoke(@{enabled = 'true'; module = 'qfle3'})
        Write-Host -ForegroundColor Cyan "$vmhost qfle3 set to true"
        $vmhost | select @{N='vCenter';E={$_.Uid.Split('@:')[1]}}, @{N='Cluster';E={$_.Parent}}, @{N='Host';E={$_.Name.Split('.')[0]}}, @{N='BootTime';E={$_.ExtensionData.Runtime.BootTime}} | Export-Csv -Path "c:\temp\native-reboot.csv" -Append -NoTypeInformation
    }
    }

    I finally used a script to reboot the hosts in that cluster one at a time for the change to take effect.

