The current ESX agent does not display VMFS volume information, so I decided to hack perfparse.sh a bit to make this work.
Basically, I changed perfparse.sh to use VMware's vdf command instead of df -lk and modified the df_k_disp() function to handle the formatting.
So, change DFBIN:
DFBIN="/usr/sbin/vdf"
And the df_k_disp() function now looks like this:
df_k_disp () {
# We answer with the output of the df command in kilobytes but strip some filesystems like proc and shm that are not needed
echo $DATESTR
#$DFBIN | egrep -v '/dev/shm|/proc|/dev/cdrom|/dev/fd0|/dev/pts|usb|:|Filesystem' | $AWKBIN '{ if ( $6 != "/tmp" ) { if ( $2 == "" ) { fs = $1; getline; print fs " " $0 } else { print $0 } } }'
# New version: use vdf, and also drop the NAS mount (matches ":/") and its wrapped second line (starts with a space)
$DFBIN | egrep -v '/dev/shm|/proc|/dev/cdrom|/dev/fd0|/dev/pts|usb|:/|^ |Filesystem' | $AWKBIN '{ if ( $6 != "/tmp" ) { if ( $2 == "" ) { fs = $1; getline; print fs " " $0 } else { print $0 } } }'
}
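To check what the agent will actually see, the same pipeline can be run by hand on the service console (a quick standalone test, with DFBIN and AWKBIN set the way perfparse.sh sets them on my box — adjust the paths if yours differ):

DFBIN=/usr/sbin/vdf
AWKBIN=/bin/awk
$DFBIN | egrep -v '/dev/shm|/proc|/dev/cdrom|/dev/fd0|/dev/pts|usb|:/|^ |Filesystem' | $AWKBIN '{ if ( $6 != "/tmp" ) { if ( $2 == "" ) { fs = $1; getline; print fs " " $0 } else { print $0 } } }'

The header, /dev/shm and the two-line NAS mount are filtered out, and each remaining filesystem comes back on a single line.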
A quick rescan of the agent and it's looking good. The only thing is that vdf only shows the HBA paths for the filesystem info rather than the LUN name. Massimiliano Daneri of VMTS has written a Perl script (vdf+) that parses vdf and displays the LUN names. I tried putting this into the df_k_disp function, however the LUN name has a space and is in brackets (and is therefore truncated, giving the same output as plain vdf), so I assume the script is seeing it as another field and excluding it. I just can't find where it's doing this.
vdf yields:
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/cciss/c0d0p6      2015904    670620   1242880  36% /
/dev/cciss/c0d0p1        51342     17771     30920  37% /boot
/dev/cciss/c0d0p7      2015904    252572   1660928  14% /home
none                    401724         0    401724   0% /dev/shm
/dev/cciss/c0d0p8       493891    107989    360403  24% /var
/dev/cciss/c0d0p2     10079884     32828   9535016   1% /vmimages
p-filer-xxxxxx.myhostname.com:/vol/vol010/vmware
                         51200       160     51040   1% /mnt/NAS
vmhba0:0:0:3          54072240   8396720  45675520  15% /vmfs/vmhba0:0:0:3
vmhba2:0:146:1        58924016  39758832  19165184  67% /vmfs/vmhba2:0:146:1
vmhba2:0:147:1        58924016  50244592   8679424  85% /vmfs/vmhba2:0:147:1
vmhba2:0:148:1        58924016  48051184  10872832  81% /vmfs/vmhba2:0:148:1
vmhba2:0:149:1        58924016  22979568  35944448  38% /vmfs/vmhba2:0:149:1
vmhba2:0:150:1        58924016  24577008  34347008  41% /vmfs/vmhba2:0:150:1
and vdf+ -V yields:
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/cciss/c0d0p6      2015904    670620   1242880  36% /
/dev/cciss/c0d0p1        51342     17771     30920  37% /boot
/dev/cciss/c0d0p7      2015904    252596   1660904  14% /home
none                    401724         0    401724   0% /dev/shm
/dev/cciss/c0d0p8       493891    107991    360401  24% /var
/dev/cciss/c0d0p2     10079884     32828   9535016   1% /vmimages
p-filer-xxxxxx.myhostname.com:/vol/vol010/vmware
                         51200       160     51040   1% /mnt/NAS
vmhba0:0:0:3          54072240   8396720  45675520  15% /vmfs/vmhba0:0:0:3 (local)
vmhba2:0:146:1        58924016  39758832  19165184  67% /vmfs/vmhba2:0:146:1 (San0)
vmhba2:0:147:1        58924016  50244592   8679424  85% /vmfs/vmhba2:0:147:1 (San1)
vmhba2:0:148:1        58924016  48051184  10872832  81% /vmfs/vmhba2:0:148:1 (San2)
vmhba2:0:149:1        58924016  22979568  35944448  38% /vmfs/vmhba2:0:149:1 (San3)
vmhba2:0:150:1        58924016  24577008  34347008  41% /vmfs/vmhba2:0:150:1 (Templates)
I would really like to have the LUN names displayed, as the HBA mappings are not that intuitive.
So in the uptime console my Disk -> File System Capacity stats look like this
/vmfs/vmhba0:0:0:3      51.57 GB     8.01 GB    43.56 GB    15 %    vmhba0:0:0:3
/vmfs/vmhba2:0:146:1    56.19 GB    37.92 GB    18.28 GB    67 %    vmhba2:0:146:1
/vmfs/vmhba2:0:147:1    56.19 GB    47.92 GB     8.28 GB    85 %    vmhba2:0:147:1
/vmfs/vmhba2:0:148:1    56.19 GB    45.83 GB    10.37 GB    81 %    vmhba2:0:148:1
/vmfs/vmhba2:0:149:1    56.19 GB    21.92 GB    34.28 GB    38 %    vmhba2:0:149:1
/vmfs/vmhba2:0:150:1    56.19 GB    23.44 GB    32.76 GB    41 %    vmhba2:0:150:1
Whereas I want them to look like this:
/vmfs/local             51.57 GB     8.01 GB    43.56 GB    15 %    vmhba0:0:0:3
/vmfs/San0              56.19 GB    37.92 GB    18.28 GB    67 %    vmhba2:0:146:1
/vmfs/San1              56.19 GB    47.92 GB     8.28 GB    85 %    vmhba2:0:147:1
/vmfs/San2              56.19 GB    45.83 GB    10.37 GB    81 %    vmhba2:0:148:1
/vmfs/San3              56.19 GB    21.92 GB    34.28 GB    38 %    vmhba2:0:149:1
/vmfs/Templates         56.19 GB    23.44 GB    32.76 GB    41 %    vmhba2:0:150:1
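To make it concrete, this is roughly the df_k_disp body I have been aiming for with vdf+. It is only a sketch of the idea, not something that works yet: it assumes DFBIN is pointed at wherever the VMTS vdf+ script is installed, and that the bracketed LUN name always shows up as a seventh field on the VMFS lines.

# Hypothetical df_k_disp body using vdf+ -V: replace the mount point with the
# LUN name so the agent would report /vmfs/San0 instead of /vmfs/vmhba2:0:146:1
$DFBIN -V | egrep -v '/dev/shm|/proc|/dev/cdrom|/dev/fd0|/dev/pts|usb|:/|^ |Filesystem' | $AWKBIN '{
if ( $6 != "/tmp" ) {
    if ( NF == 7 ) {                 # VMFS lines carry a trailing "(LunName)" column
        lun = $7
        gsub( /[()]/, "", lun )      # strip the brackets
        print $1, $2, $3, $4, $5, "/vmfs/" lun
    } else {
        print $0
    }
}
}'

If the space in vdf+'s output is really what trips the agent up, something along these lines should avoid it entirely, but I may well be missing where perfparse.sh or the agent does its own field splitting.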
Can you help?
Thanks
Nick