PowerShell: Find the largest VM disk

Sometimes you need to find the largest virtual disk in your environment, for example when you are sizing LUNs for datastores.

Here is a script that helps you do that.

The requirements are PowerShell and the VMware.PowerCLI module.

Use it at your own risk.

Import-Module VMware.PowerCLI

Connect-VIServer <vCenter Name>


Function Get-LargestDisk {
  param(
    $Datastores=$null
  )
  $largest = 0
  $largestVm = $null

  if ($null -eq $Datastores) {
    Write-Host "Searching through all VMs."
    $vms = Get-VM
  } else {
    Write-Host "Searching through VMs on datastores: $Datastores"
    $vms = $Datastores | Get-VM
  }

  foreach ($vm in $vms) {
    $hdds = $vm | Get-HardDisk

    foreach ($hdd in $hdds) {
      $size = $hdd.CapacityGB

      if ($size -gt $largest) {
        # Keep the unrounded size for comparison; round only for display
        Write-Host "Found a larger VM: $vm Size: $([math]::Round($size)) GB"
        $largestVm = $vm
        $largest = $size
      }
    }
  }
  Write-Host "Largest Disk: $largest GB Largest VM: $largestVm"
}

Get-LargestDisk -Datastores (Get-Datastore V7000*)

vSAN – Downgrading NVMe driver in ESXi 6.7 Update 1

I recently ran into an HPE ProLiant m510 server running vSAN, where vSAN complained that the controller driver for the NVMe disk was too new.

The health error said that the current driver nvme (1.2.2.17-1vmw.670.1.28.10302608) was too new and that the recommended driver was nvme (1.2.1.34-1vmw.670.0.0.8169922).

Downgrading is not always a breeze. The VMware Compatibility Guide lists the NVMe disk as supported for vSAN 6.7 Update 1, but there are no download links for a specific driver, so how do you get the old driver? Continue reading vSAN – Downgrading NVMe driver in ESXi 6.7 Update 1
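Once you have located the older VIB (for example inside the ESXi 6.7 GA offline bundle), the downgrade itself goes through esxcli. Below is a minimal sketch using PowerCLI's Get-EsxCli wrapper; the host name and VIB path are placeholders, so adjust them to your environment.

# Get an esxcli v2 handle for the host (esx01.lab.local is a placeholder)
$esxcli = Get-EsxCli -VMHost (Get-VMHost esx01.lab.local) -V2

# Check which nvme driver VIB is currently installed
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -eq 'nvme' }

# 'vib install' (unlike 'vib update') allows replacing a VIB with an older version.
# Put the host in maintenance mode first; a reboot is required afterwards.
$esxcli.software.vib.install.Invoke(@{ viburl = @('/tmp/VMW_bootbank_nvme_1.2.1.34-1vmw.670.0.0.8169922.vib') })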

Deep Dive into VMware vSAN Performance Benchmarks

VMworld 2017 Breakout Session Proposal Accepted!

I am VERY happy to announce that my session proposal for VMworld 2017 in Barcelona has been accepted. I will be sharing the session with my excellent coworker Karsten Drejer.

I can’t wait to tell you all about our findings and the awesome performance that we are seeing on VMware vSAN. I will be comparing these benchmark numbers to traditional storage from well-known vendors.

Please support me by attending my session at VMworld 2017 in Barcelona. The session ID is #STO1117BE, and you can find it here: https://my.vmworld.com/scripts/catalog/eucatalog.jsp?search=STO1117

WARNING: My session is very technical, so beware. I will, however, also have some graphs with pretty colors, so even if you are not completely down with IOPS, read/write latency, bits and bytes, come anyway. I will try hard to explain my findings, and Karsten will also cover some general vSAN knowledge in the first part of the session.

VMworld 2017 in Barcelona runs from the 11th to the 14th of September.

Update: Our session is scheduled for the 13th of September in Hall 8, Room 17.

FCoE Adapters and datastores missing after vSphere ESXi 6.5 Install

Today I upgraded some HP BL460c Gen9 blade servers from ESXi 6.0 to ESXi 6.5. I always reinstall when going from 5.5 to 6.0 or from 6.0 to 6.5, and after the installation finished I found that the FCoE adapters and datastores were missing.

The servers are connected to some HP 3PAR storage using HP FlexFabric 10Gb 2-port 536FLB Adapters.

To regain access to your storage you need to enable the FCoE adapters using the esxcli command.
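As a sketch (the host and NIC names are examples, not from my environment), the adapters can be re-enabled with PowerCLI's Get-EsxCli, which wraps the same esxcli namespaces:

# Get an esxcli v2 handle for the host
$esxcli = Get-EsxCli -VMHost (Get-VMHost esx01.lab.local) -V2

# List the FCoE-capable NICs to find the right vmnic names
$esxcli.fcoe.nic.list.Invoke()

# Activate FCoE on each adapter (vmnic2/vmnic3 are placeholders)
$esxcli.fcoe.nic.discover.Invoke(@{ nicname = 'vmnic2' })
$esxcli.fcoe.nic.discover.Invoke(@{ nicname = 'vmnic3' })

# Rescan so the datastores reappear
Get-VMHost esx01.lab.local | Get-VMHostStorage -RescanAllHba -RescanVmfs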

Continue reading FCoE Adapters and datastores missing after vSphere ESXi 6.5 Install

Storage Optimization for VMware vSphere

This is meant as a living article for looking up best-practice settings for different storage arrays when adding them to VMware vSphere.

Why modify the default settings?

When datastores are added to an ESXi host, there are multiple ways ESXi can leverage the storage. In some cases ESXi will default to the Most Recently Used path policy (Active/Standby, or MRU), which means you only use one path at a time. This can create a bottleneck in your storage infrastructure. Many arrays can handle Round Robin (multipath Active/Active, or RR); enabling it distributes your storage traffic across multiple paths, provided that you have multiple adapters.
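As an illustration, switching all disk LUNs on a host to Round Robin can be done with PowerCLI. The host name is a placeholder, and you should always check your array vendor's recommendation before changing the policy:

# Set Round Robin on every disk LUN that is not already using it
Get-VMHost esx01.lab.local | Get-ScsiLun -LunType disk |
  Where-Object { $_.MultipathPolicy -ne 'RoundRobin' } |
  Set-ScsiLun -MultipathPolicy RoundRobin

Note that this also touches local disks, so in production you would filter on the canonical names of your array's LUNs first.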

Other settings control how many I/Os ESXi sends down a path before switching to another, or advanced options that alter the way ESXi handles the storage.
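For example, the number of I/Os sent down a path before switching can be set together with the path policy. The value of 1 below is only an illustration; verify the recommended value for your specific array:

# Switch to the next path after every I/O instead of the default 1000
Get-VMHost esx01.lab.local | Get-ScsiLun -LunType disk |
  Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1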

Getting these settings right will most often result in better performance, and it can also help you avoid problems that lead to outages. Continue reading Storage Optimization for VMware vSphere