Date: 4/23/2024
Product: vSphere ESXi
Version: 7.0
Fixed releases: None provided.
The vmkernel log sporadically spams messages similar to the following:

    xxxx-xx-xxT04:39:02.105Z cpu34:2100630)LVM: 7053: Failed to probe the VMFS header of naa.6000144000000010f017701350c7d28a:1 due to No filesystem on the device
    xxxx-xx-xxT04:39:02.113Z cpu34:2100630)LVM: 7053: Failed to probe the VMFS header of naa.6000144000000010f017701350c7d290:1 due to No filesystem on the device
    xxxx-xx-xxT04:39:02.125Z cpu34:2100630)LVM: 7053: Failed to probe the VMFS header of naa.6000144000000010f017701350c7d295:1 due to No filesystem on the device
    xxxx-xx-xxT04:39:11.387Z cpu34:2100630)LVM: 7053: Failed to probe the VMFS header of naa.6000144000000010f017701350c7d2bb:1 due to No filesystem on the device
    xxxx-xx-xxT04:39:11.390Z cpu34:2100630)LVM: 7053: Failed to probe the VMFS header of naa.6000144000000010f017701350c7d2c1:1 due to No filesystem on the device
    xxxx-xx-xxT04:39:11.393Z cpu34:2100630)LVM: 7053: Failed to probe the VMFS header of naa.6000144000000010f017701350c7d2c6:1 due to No filesystem on the device
    xxxx-xx-xxT04:39:11.396Z cpu34:2100630)LVM: 7053: Failed to probe the VMFS header of naa.6000144000000010f017701350c7d2cb:1 due to No filesystem on the device
    xxxx-xx-xxT04:39:11.409Z cpu34:2100630)LVM: 7053: Failed to probe the VMFS header of naa.6000144000000010f017701350c7d2bb:1 due to No filesystem on the device

The symptom mostly appears right after a host detects a VMFS volume.
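To quickly confirm that a host is emitting these messages, the active vmkernel log can be searched directly. This is a minimal check, using the standard ESXi log path that the verification script later in this document also reads:

    # Count VMFS probe failure messages in the active vmkernel log
    grep -c "Failed to probe the VMFS header" /var/run/log/vmkernel.log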
The purpose of this document is to inform CXS engineers about the impact of these VMFS probe failure messages.
A host initiates header probing on any VMFS volume it encounters. If a volume consists of multiple extents, probing is done only on the first extent, since that is where the filesystem header resides. The log spamming above can happen when a host instead initiates header probing on every physical extent within a volume.
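To see which physical extents back a given datastore, the same vmkfstools query used by the verification script below can be run by hand; the datastore name here is a placeholder:

    # Print the extents backing a datastore; the first device listed under
    # "Partitions spanned" is the head extent that carries the VMFS header
    vmkfstools -P -v 10 /vmfs/volumes/<datastore-name>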
If the host does not show any particular storage issues, the symptom is mostly benign.
To check whether the probe failure is occurring on the first extent, open a CLI session (SSH or DCUI) to the host and run the following script:

    for dir in $(find /vmfs/volumes/ -mindepth 1 -maxdepth 1 -type l); do
        # Resolve the head (first) extent of the datastore; it is the first
        # device listed under "Partitions spanned" in the vmkfstools output.
        res=$(vmkfstools -P -v 10 "$dir" 2> /dev/null | grep -A1 "Partitions spanned" | tail -n1 | sed -e 's/[[:space:]]//g')
        [ -n "$res" ] || continue
        # Flag the datastore if a probe failure was logged against its head extent.
        if grep "$res" /var/run/log/vmkernel.* 2> /dev/null | grep -q "Failed to probe the VMFS"; then
            echo "Physical extent $res of datastore $(basename "$dir") is missing LVM magic!"
        else
            echo "Physical extent $res of datastore $(basename "$dir") has LVM metadata intact"
        fi
    done
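On a host where the failures occur only on non-head extents, the script should report every datastore as intact; any "is missing LVM magic!" line means a probe failure was logged against the head extent itself, and the storage path deserves closer investigation. Example output (the datastore name is illustrative):

    Physical extent naa.6000144000000010f017701350c7d28a:1 of datastore DS01 has LVM metadata intact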