I have a bare metal server running vSphere/ESXi 7.0 U3d, and a bare metal NAS running TrueNAS Core 13.0. The NAS has a single pool shared via iSCSI; the pool uses all defaults (lz4 compression, SHA512 checksums, no dedup, 128 KiB record size). For setup purposes, the shares are wide open with no security restrictions. The machines are connected to the same switch, are on the same VLAN, and can communicate freely. iSCSI is configured in vSphere using the software adapter and dynamic discovery, with no authentication.
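For context, the vSphere-side iSCSI setup amounts to roughly the following from the ESXi shell (a sketch; `vmhba64` and `192.168.1.50` are placeholders for my actual adapter name and portal address):

```
# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# Point dynamic discovery (send targets) at the TrueNAS portal
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.50:3260

# Rescan so the LUN shows up as a device
esxcli storage core adapter rescan --adapter=vmhba64
```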
The iSCSI disk is visible in vSphere and shows the correct size and details. Creating a VMFS5 datastore on the disk completes as expected, without errors. Creating a VMFS6 datastore on the disk, however, fails with a message telling me to check the vmkernel.log. I didn't see any helpful messages there, but admittedly it's a huge log and I'm not sure what I'm looking for. When this happens, the partitions appear to be created successfully and the capacity graphs and such become visible, but the datastore is never fully created and can't be mounted.
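In case someone spots something I missed, this is roughly how I've been filtering the log and inspecting the device (the naa ID is a placeholder for my LUN's actual identifier):

```
# Watch the log live while retrying the datastore creation
tail -f /var/log/vmkernel.log | grep -iE 'vmfs|lvm'

# Afterwards, pull anything mentioning the device or filesystem creation
grep -iE 'vmfs6|naa\.6589cfc0' /var/log/vmkernel.log

# Sanity-check what ESXi reports for the device (block sizes, VAAI/ATS status)
esxcli storage core device list -d naa.6589cfc0xxxxxxxxxxxxxxxxxxxxxxxx
esxcli storage core device vaai status get -d naa.6589cfc0xxxxxxxxxxxxxxxxxxxxxxxx
```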
Things I've tried:
- Turning off compression (no change)
- Changing the record size to several values from 32 KiB up (no change)
- Creating the datastore from the vSphere CLI with partedUtil and vmkfstools (works up until the final vmkfstools command to create the filesystem, which fails with an "invalid parameter" error despite the parameters being correct, checked and triple-checked; see the sketch after this list)
- Booting the NAS from a Linux disk and using fdisk to manually clear each and every disk before rebuilding the pool from scratch (no change)
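For the CLI attempt specifically, the sequence looked roughly like this (the device ID and end sector are placeholders for my actual values; the GUID is the standard VMFS partition type GUID):

```
# Placeholder device ID; substitute the real naa identifier
DISK=/vmfs/devices/disks/naa.6589cfc0xxxxxxxxxxxxxxxxxxxxxxxx

# Confirm the disk geometry and current partition table
partedUtil getptbl "$DISK"

# Write a GPT with a single VMFS partition (starting at sector 2048; the end
# sector comes from the geometry reported above)
partedUtil setptbl "$DISK" gpt "1 2048 4294967262 AA31E02A400F11DB9590000C2911D1B8 0"

# This is the step that fails for me with "invalid parameter"
vmkfstools -C vmfs6 -b 1m -S test-datastore "${DISK}:1"
```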
I feel like there's probably something simple I've overlooked, but I've yet to figure out what. I know the setup is fundamentally sound, because I have another instance of the same setup, with the same settings, that works just fine. Incidentally, if I present the new iSCSI disk to that other vSphere instance, I can't create the VMFS6 datastore from there either, which suggests the problem is on the TrueNAS side.
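The only TrueNAS-side comparison I've thought to make between the working and failing boxes is the LUN configuration itself; a sketch of how I'd dump it from the TrueNAS Core shell (assuming ctladm, which ships with FreeBSD's CTL target layer, and the midclt middleware client):

```
# List the CTL LUNs backing the iSCSI extents, including block size and options
ctladm devlist -v

# Query the extent configuration through the TrueNAS middleware
midclt call iscsi.extent.query
```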
Appreciate any guidance!