Issue with NIC driver on HPE servers after updating HPE drivers on ESXi 6.5
I ran into an issue the other day with a vCenter Server Appliance filling up one of its partitions. The partition in question was /storage/seat, which holds the PostgreSQL database, so the vCenter server was in trouble.
After some digging around I realized that the root cause was a new error event, logged by all ESXi hosts at a rapid pace. The errors had started during the latest driver and base updates, and only the HPE servers were affected. Continue reading 10fb does not support flow control autoneg
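To see which partition is filling up, you can check the disk usage from the VCSA shell; this is a minimal sketch, assuming shell access to the appliance:

```shell
# On the VCSA shell: show how full each partition is. On a vCenter appliance,
# /storage/seat holds the vPostgres stats/events/alarms/tasks ("SEAT") data,
# so it shows up in this listing alongside the other /storage mounts.
df -h | grep -E 'Filesystem|seat'
```

If the Use% for /storage/seat is near 100%, the database is at risk of running out of space.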
Just had an odd issue today.
A customer had created a Virtual Distributed Switch but was unable to add his ESXi hosts to the vDS. The error said: “Host is not compatible with the VDS version.”
He could only join his ESXi 6.5 host to a version 5.5 vDS; if the vDS was upgraded to version 6.0 or 6.5, joining failed.
There are multiple reports of this online related to upgraded hosts and vCenters. I suspect that it is an issue that you only run into if you do major upgrades without reinstalling ESXi, and since I never do that I have not had that problem before.
The quick solution to this problem is: Continue reading Host xxx.xxx.xxx.xxx is not compatible with the VDS version
I have had an annoying issue at two customer sites now, and I want to share the solution with you.
The problem is that you cannot VMotion VMs to newly installed ESXi 6.5 hosts running on Lenovo SR650 hardware. The CPU in the new host is an Intel Xeon Gold 6154 processor, and the old hosts use Intel Xeon E7-4880 v2 processors. I do not think the source CPU model is relevant to the issue; it could be any supported Intel CPU in the same CPU family.
When trying to VMotion the following error is displayed:
The virtual machine requires hardware features that are unsupported or disabled on the target host:
"""""""""""""* General incompatibilities
If possible, use a cluster with Enhanced vMotion Compatibility (EVC) enabled; see KB article 1003212.
CPUID details: incompatibility at level 0x1 register 'ecx'.
Host bits: 0110:0010:1101:1000:0011:0010:0000:0011
If you then try to enable EVC on the cluster, it complains that the new host has an issue and returns this error:
The host's CPU hardware should support the cluster's current Enhanced vMotion Compatibility mode, but some of the necessary CPU features are missing from the host. Check the host's BIOS configuration to ensure that no necessary features are disabled (such as XD, VT, AES, or PCLMULQDQ for Intel, or NX for AMD). For more information, see KB article 1003212.
Continue reading Unable to VMotion to new Lenovo SR650 Host
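Out of curiosity, you can decode which feature bits the host reports from the bit string in the error. This is a small illustrative sketch using the standard Intel CPUID leaf 1 ECX bit assignments (for example bit 1 = PCLMULQDQ, bit 25 = AES); the name list in the comment is deliberately partial.

```shell
# Decode the "Host bits" string from the vMotion error (CPUID leaf 0x1, ECX).
# The leftmost character is bit 31; a '1' means that feature bit is set.
bits="0110:0010:1101:1000:0011:0010:0000:0011"
b=$(echo "$bits" | tr -d ':')            # strip the colon separators
set_bits=""
i=0
while [ $i -lt 32 ]; do
    c=$(echo "$b" | cut -c$((i + 1)))    # character i corresponds to bit 31-i
    [ "$c" = "1" ] && set_bits="$set_bits $((31 - i))"
    i=$((i + 1))
done
echo "ECX bits set:$set_bits"
# A few well-known leaf 1 ECX bits (Intel SDM): 0=SSE3 1=PCLMULQDQ 9=SSSE3
# 12=FMA 13=CMPXCHG16B 19=SSE4.1 20=SSE4.2 23=POPCNT 25=AES 28=AVX
```

Comparing the decoded host bits against what the VM (or the EVC baseline) requires shows exactly which CPU feature is causing the mismatch.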
Had an annoying error today while updating an ESXi image for use with AutoDeploy. When I reinstalled the hosts, they would not join vCenter. My workflow removes them from vCenter during the process, but they were unable to rejoin, and I could not add them manually either.
I got two errors:
When selecting the license in the Add Host wizard, I got this error:
Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.
I pushed through, but when the task reached 100% it gave another error:
License file download from <servername> to vCenter Server failed due to exception: vmodl.fault.SecurityError.
Well, to cut a long story short, it turned out to be a time issue. Some of the servers were not allowed to talk to the NTP servers, and their time had drifted. vCenter was located on one of these servers, and its time was 5-6 minutes behind the ESXi servers that I was trying to join.
The NTP connection issue was corrected. Time was checked on all servers.
Hope this helps someone.
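A quick way to spot this kind of drift is to compare the clocks directly. This is a sketch, assuming SSH access to both the ESXi hosts and the VCSA:

```shell
# On each ESXi host: print the host's current time (UTC)
esxcli system time get

# On the VCSA shell: print the appliance's current time (UTC)
date -u

# If the two values are more than a few minutes apart, host-add and license
# operations can fail (e.g. with vmodl.fault.SecurityError, as above).
```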
So I was installing some Fujitsu Primergy RX2530 M4 servers today, and since I mostly work with HPE and Lenovo servers, I had to look up the optimal BIOS settings for running ESXi 6.5.
This is what I came up with. From the default settings I only changed a couple of things that I found important. Continue reading Fujitsu Primergy ESXi Install Server Notes – BIOS Settings
Often when I do health checks on vSphere environments, I come across VMs that have multiple vNics. That can be a serious security risk if these vNics are connected to different security zones. A VM that is connected both to a DMZ and to an administration network could give a hacker easy access to more privileged networks. Sometimes this configuration is acceptable if the operating system is designed to handle it, for instance when we are dealing with a firewall.
I often find VMs with a configuration where one of the network adapters is disconnected. Sometimes the second vNic was simply forgotten, and other times it is only connected from vCenter when access to the secondary network is needed for some kind of maintenance.
There is a setting on the virtual network adapter called “allowGuestControl”, and I was wondering if this setting could be a security issue. Could a hacker enable the disconnected network adapter from within the guest operating system and thereby gain access to a privileged network? Continue reading VMs with multiple vNics could be a security risk
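You can see how a VM's vNics are configured at the file level by inspecting its .vmx file from the host shell; the datastore path and VM name below are just examples:

```shell
# Host shell: show the vNIC connection and guest-control keys in a .vmx file.
# Datastore and VM name are examples; adjust them to your environment.
grep -i 'ethernet' /vmfs/volumes/datastore1/examplevm/examplevm.vmx |
    grep -iE 'allowGuestControl|startConnected'

# Typical lines look like:
#   ethernet1.startConnected = "FALSE"
#   ethernet1.allowGuestControl = "TRUE"
```

When allowGuestControl is TRUE for an adapter, the connection state can be toggled from inside the guest via VMware Tools, which is exactly the scenario the post is asking about.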
Today I upgraded a customer's hosts to ESXi 6.5 Update 1, but unfortunately some of them ended up purple screening on reboot after the update.
Affected Servers so far
- HPE BL460c Gen9
- HPE DL360p Gen8 (Reported by anonymous user)
- HPE DL380 Gen9 (Reported by Bernhard)
- HPE DL380 Gen8 (Reported by Ralf)
- HPE DL380p Gen9 (Reported by Victor)
PSOD: #PF Exception 14 in world 68297:sfcb-intelcim IP 0x41801b704d8f addr 0x443919649c000
Continue reading ESXi 6.5 Update 1 PSOD on HPE 460c Gen9 after Ixgben driver update
Sometimes I find it easier to create a new vCenter server than to migrate the old one, and in many cases that is a perfectly good solution.
But annoyingly there is a lot of manual work involved.
One problem is the VMs and Templates folders. They do not follow the host, so you have to create the folder structure manually and move each VM into the correct folder. Well, I am way too lazy to do that by hand, so it’s time to Automate! Continue reading Migrate folder structure from old to new vSphere vCenter
Today I upgraded some HP BL460c Gen9 Blade Servers from ESXi 6.0 to ESXi 6.5. I always reinstall when going from 5.5 to 6.0 or from 6.0 to 6.5, and after the server was done installing I found that the FCoE adapters and datastores were missing.
The servers are connected to some HP 3PAR storage using HP FlexFabric 10Gb 2-port 536FLB Adapters.
To regain access to your storage you need to enable the FCoE adapters using the esxcli command.
Continue reading FCoE Adapters and datastores missing after vSphere ESXi 6.5 Install
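As a sketch of the esxcli workflow (the vmnic names are examples and will differ per server; see the full post for the specifics):

```shell
# List NICs that are FCoE-capable on this host
esxcli fcoe nic list

# Activate FCoE discovery on each relevant NIC (vmnic names are examples)
esxcli fcoe nic discover -n vmnic2
esxcli fcoe nic discover -n vmnic3

# Verify that the FCoE adapters are visible again
esxcli fcoe adapter list
```

After the adapters come back, a rescan of the storage adapters should bring the datastores back online.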
This is meant as a dynamic article for looking up best practice settings for different storage arrays when adding them to VMware.
Why modify the default settings?
When datastores are added to an ESXi host, there are multiple ways that ESXi can leverage the storage. In some cases ESXi will default to the Most Recently Used path policy (Active/Standby, or MRU), which means that you only use one path at a time. This can become a bottleneck in your storage infrastructure. Many arrays can handle Round Robin (multipath Active/Active, or RR); enabling it distributes your storage traffic across multiple paths, provided that you have multiple adapters.
Other settings involve how many I/Os ESXi should send down a path before switching to another path, or advanced settings that alter the way ESXi handles the storage.
Getting these settings right will most often result in better performance, but it can also help you avoid problems that could lead to outages. Continue reading Storage Optimization for VMware vSphere
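As an illustration, path policies can be inspected and changed with esxcli. The SATP name and the device identifier below are placeholders; the correct values depend on your array and its vendor's recommendations:

```shell
# Make Round Robin the default path policy for devices claimed by a given
# SATP (VMW_SATP_ALUA is just an example SATP)
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR

# Or switch a single device (the naa identifier is a placeholder)
esxcli storage nmp device set --device naa.60000000000000000000000000000001 \
    --psp VMW_PSP_RR

# Optionally lower the number of I/Os sent down a path before switching,
# which some vendors recommend for better path utilization
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.60000000000000000000000000000001 --type iops --iops 1
```

Note that the default-PSP change only affects devices claimed after the change, so existing devices may need to be set individually or the host rebooted.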