
Ubuntu ZFS zpool lost after reboot because the /dev/sd* names were re-ordered


I created the ZFS pool with:

$ sudo zpool create -f data raidz2 sda sdb sdc sdd
$ sudo zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    data        ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        sda     ONLINE       0     0     0
        sdb     ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0

errors: No known data errors

lsblk

$ sudo lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0         7:0    0    55M  1 loop  /snap/core18/1880
loop1         7:1    0  71.3M  1 loop  /snap/lxd/16099
loop2         7:2    0  29.9M  1 loop  /snap/snapd/8542
sda           8:0    0   2.7T  0 disk  
├─sda1        8:1    0   2.7T  0 part  
└─sda9        8:9    0     8M  0 part  
sdb           8:16   0   2.7T  0 disk  
├─sdb1        8:17   0   2.7T  0 part  
└─sdb9        8:25   0     8M  0 part  
sdc           8:32   0   2.7T  0 disk  
├─sdc1        8:33   0   2.7T  0 part  
└─sdc9        8:41   0     8M  0 part  
sdd           8:48   0   2.7T  0 disk  
├─sdd1        8:49   0   2.7T  0 part  
└─sdd9        8:57   0     8M  0 part  
sde           8:64   1   961M  0 disk  
├─sde1        8:65   1   914M  0 part  
└─sde2        8:66   1   3.9M  0 part  
sdf           8:80   0 111.8G  0 disk  
├─sdf1        8:81   0   512M  0 part  /boot/efi
└─sdf2        8:82   0 111.3G  0 part  
  └─md0       9:0    0 111.2G  0 raid1 
    └─md0p1 259:3    0 111.2G  0 part  /
nvme0n1     259:0    0 111.8G  0 disk  
├─nvme0n1p1 259:1    0   512M  0 part  
└─nvme0n1p2 259:2    0 111.3G  0 part  
  └─md0       9:0    0 111.2G  0 raid1 
    └─md0p1 259:3    0 111.2G  0 part  /

Then, after a reboot, the /dev/sd* names were re-ordered and zpool status shows no pools available:


$ zpool status
no pools available

$ sudo lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0         7:0    0    55M  1 loop  /snap/core18/1880
loop1         7:1    0  71.3M  1 loop  /snap/lxd/16099
loop2         7:2    0  29.9M  1 loop  /snap/snapd/8542
sda           8:0    0 111.8G  0 disk  
├─sda1        8:1    0   512M  0 part  /boot/efi
└─sda2        8:2    0 111.3G  0 part  
  └─md0       9:0    0 111.2G  0 raid1 
    └─md0p1 259:3    0 111.2G  0 part  /
sdb           8:16   1   961M  0 disk  
├─sdb1        8:17   1   914M  0 part  
└─sdb2        8:18   1   3.9M  0 part  
sdc           8:32   0   2.7T  0 disk  
├─sdc1        8:33   0   2.7T  0 part  
└─sdc9        8:41   0     8M  0 part  
sdd           8:48   0   2.7T  0 disk  
├─sdd1        8:49   0   2.7T  0 part  
└─sdd9        8:57   0     8M  0 part  
sde           8:64   0   2.7T  0 disk  
├─sde1        8:65   0   2.7T  0 part  
└─sde9        8:73   0     8M  0 part  
sdf           8:80   0   2.7T  0 disk  
├─sdf1        8:81   0   2.7T  0 part  
└─sdf9        8:89   0     8M  0 part  
nvme0n1     259:0    0 111.8G  0 disk  
├─nvme0n1p1 259:1    0   512M  0 part  
└─nvme0n1p2 259:2    0 111.3G  0 part  
  └─md0       9:0    0 111.2G  0 raid1 
    └─md0p1 259:3    0 111.2G  0 part  /

Creating a partition on /dev/sda (/dev/sda1; when partitioning I used 't' to choose type 48, Solaris /usr & Apple ZFS, though I'm not sure whether that matters here) and then using the PARTUUID of /dev/sda1 as the ZFS member seems to have resolved the issue.

Disk /dev/sda: 2.75 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: Hitachi HUA72303
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E22EFAF2-BB4B-459A-A22F-D772E84C3C9E

Device     Start        End    Sectors  Size Type
/dev/sda1   2048 5860533134 5860531087  2.7T Solaris /usr & Apple ZFS
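
For reference, a non-interactive equivalent of that partitioning step with sgdisk (from the gdisk package) might look like the sketch below; note that --zap-all destroys any existing partition table, so only run it on a disk you intend to wipe:

$ sudo sgdisk --zap-all /dev/sda
$ sudo sgdisk -n 1:0:0 -t 1:BF01 /dev/sda    # BF01 = Solaris /usr & Apple ZFS, spanning the whole disk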

$ lsblk --ascii -o NAME,PARTUUID,LABEL,PATH,FSTYPE
NAME        PARTUUID                             LABEL                           PATH           FSTYPE
sda                                                                              /dev/sda       
`-sda1      4cad5b3d-7348-ef4b-808e-2beace6e9a21                                 /dev/sda1      

Repeat the same partitioning for sdb, sdc, and sdd, for example with the loop sketched below.
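
A hedged sketch of repeating the same sgdisk commands for the remaining disks in one loop (again, this wipes those disks):

$ for d in sdb sdc sdd; do
>   sudo sgdisk --zap-all /dev/$d
>   sudo sgdisk -n 1:0:0 -t 1:BF01 /dev/$d
> done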

This results in:

$ ls -l /dev/disk/by-partuuid/
total 0
lrwxrwxrwx 1 root root 10 Apr 17 01:55 361fad97-fa34-604b-8733-3c08147ab32e -> ../../sdb1
lrwxrwxrwx 1 root root 10 Apr 17 01:53 4cad5b3d-7348-ef4b-808e-2beace6e9a21 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 17 01:56 88445cda-d61e-ea4c-9f73-4f151996f4a0 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Apr 17 01:55 fda16bae-fe77-4b4e-9eae-8fdecd2bfd80 -> ../../sdc1

Then use the PARTUUIDs to create the ZFS pool:

$ sudo zpool create data /dev/disk/by-partuuid/4cad5b3d-7348-ef4b-808e-2beace6e9a21  /dev/disk/by-partuuid/361fad97-fa34-604b-8733-3c08147ab32e /dev/disk/by-partuuid/fda16bae-fe77-4b4e-9eae-8fdecd2bfd80 /dev/disk/by-partuuid/88445cda-d61e-ea4c-9f73-4f151996f4a0
igdvs@srv-bk-vm:~$ sudo zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

    NAME                                    STATE     READ WRITE CKSUM
    data                                    ONLINE       0     0     0
      4cad5b3d-7348-ef4b-808e-2beace6e9a21  ONLINE       0     0     0
      361fad97-fa34-604b-8733-3c08147ab32e  ONLINE       0     0     0
      fda16bae-fe77-4b4e-9eae-8fdecd2bfd80  ONLINE       0     0     0
      88445cda-d61e-ea4c-9f73-4f151996f4a0  ONLINE       0     0     0

errors: No known data errors

After rebooting Ubuntu the /dev/sd* names were re-ordered again, but the zpool is not affected since it is built on PARTUUIDs:

$ sudo zpool status
[sudo] password for igdvs: 
  pool: data
 state: ONLINE
  scan: none requested
config:

    NAME                                    STATE     READ WRITE CKSUM
    data                                    ONLINE       0     0     0
      4cad5b3d-7348-ef4b-808e-2beace6e9a21  ONLINE       0     0     0
      361fad97-fa34-604b-8733-3c08147ab32e  ONLINE       0     0     0
      fda16bae-fe77-4b4e-9eae-8fdecd2bfd80  ONLINE       0     0     0
      88445cda-d61e-ea4c-9f73-4f151996f4a0  ONLINE       0     0     0

errors: No known data errors
igdvs@srv-bk-vm:~$ sudo lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0         7:0    0    55M  1 loop  /snap/core18/1880
loop1         7:1    0  63.3M  1 loop  /snap/core20/1852
loop2         7:2    0  91.9M  1 loop  /snap/lxd/24061
loop3         7:3    0  49.9M  1 loop  /snap/snapd/18596
loop4         7:4    0  71.3M  1 loop  /snap/lxd/16099
loop5         7:5    0  55.6M  1 loop  /snap/core18/2721
sda           8:0    0 111.8G  0 disk  
├─sda1        8:1    0   512M  0 part  /boot/efi
└─sda2        8:2    0 111.3G  0 part  
  └─md0       9:0    0 111.2G  0 raid1 
    └─md0p1 259:3    0 111.2G  0 part  /
sdb           8:16   1   961M  0 disk  
├─sdb1        8:17   1   951M  0 part  
└─sdb9        8:25   1     8M  0 part  
sdc           8:32   0   2.7T  0 disk  
└─sdc1        8:33   0   2.7T  0 part  
sdd           8:48   0   2.7T  0 disk  
└─sdd1        8:49   0   2.7T  0 part  
sde           8:64   0   2.7T  0 disk  
└─sde1        8:65   0   2.7T  0 part  
sdf           8:80   0   2.7T  0 disk  
└─sdf1        8:81   0   2.7T  0 part  
nvme0n1     259:0    0 111.8G  0 disk  
├─nvme0n1p1 259:1    0   512M  0 part  
└─nvme0n1p2 259:2    0 111.3G  0 part  
  └─md0       9:0    0 111.2G  0 raid1 
    └─md0p1 259:3    0 111.2G  0 part  /

While I agree that using the by-id names is best, ZFS usually copes fine with disks moving around because the zpool is imported at boot time; I'm not sure why that didn't happen this time. The quick way to recover from such problems is the zpool import command. Depending on the exact problem, the import may need the pool name or various levels of force options; please read the man page if you're in this situation and found this answer via search.
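
A few hedged examples of what that recovery can look like, using the pool name data from the question; -d restricts the device scan to one directory and -f forces the import if the pool was not cleanly exported:

$ sudo zpool import                           # scan devices and list importable pools
$ sudo zpool import data                      # import the pool by name
$ sudo zpool import -d /dev/disk/by-id data   # import, scanning only the by-id links
$ sudo zpool import -f data                   # force the import if needed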


An even better approach is to use the /dev/disk/by-id/ names, which include the disk model and serial number:

$ ls -l /dev/disk/by-id/

lrwxrwxrwx 1 root root  9 Apr 17 02:42 scsi-SATA_Hitachi_HUA72303_MK0331YHGZHD7A -> ../../sdd
lrwxrwxrwx 1 root root  9 Apr 17 02:42 scsi-SATA_Hitachi_HUA72303_MK0371YHK1ME0A -> ../../sdc
lrwxrwxrwx 1 root root  9 Apr 17 02:42 scsi-SATA_Hitachi_HUA72303_MK0371YHK2E0XA -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 17 02:42 scsi-SATA_Hitachi_HUA72303_MK0371YHK2L7NA -> ../../sdb

$ sudo zpool create data /dev/disk/by-id/scsi-SATA_Hitachi_HUA72303_MK0371YHK2E0XA /dev/disk/by-id/scsi-SATA_Hitachi_HUA72303_MK0371YHK2L7NA /dev/disk/by-id/scsi-SATA_Hitachi_HUA72303_MK0371YHK1ME0A /dev/disk/by-id/scsi-SATA_Hitachi_HUA72303_MK0331YHGZHD7A

$ zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

    NAME                                         STATE     READ WRITE CKSUM
    data                                         ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK2E0XA  ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK2L7NA  ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK1ME0A  ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0331YHGZHD7A  ONLINE       0     0     0

errors: No known data errors
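
If a pool was already created with plain /dev/sdX names, it can usually be switched to by-id paths without recreating it, by exporting and re-importing it. A hedged sketch (nothing may be using the pool while it is exported):

$ sudo zpool export data
$ sudo zpool import -d /dev/disk/by-id data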

After a reboot the sd* names were re-ordered again, but the ZFS pool is not affected:

$ sudo zpool status
[sudo] password for igdvs: 
  pool: data
 state: ONLINE
  scan: none requested
config:

    NAME                                         STATE     READ WRITE CKSUM
    data                                         ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK2E0XA  ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK2L7NA  ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK1ME0A  ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0331YHGZHD7A  ONLINE       0     0     0

errors: No known data errors
igdvs@srv-bk-vm:~$ sudo lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0         7:0    0    55M  1 loop  /snap/core18/1880
loop1         7:1    0  55.6M  1 loop  /snap/core18/2721
loop2         7:2    0  71.3M  1 loop  /snap/lxd/16099
loop3         7:3    0  49.9M  1 loop  /snap/snapd/18596
loop4         7:4    0  63.3M  1 loop  /snap/core20/1852
loop5         7:5    0  91.9M  1 loop  /snap/lxd/24061
sda           8:0    0 111.8G  0 disk  
├─sda1        8:1    0   512M  0 part  /boot/efi
└─sda2        8:2    0 111.3G  0 part  
  └─md0       9:0    0 111.2G  0 raid1 
    └─md0p1 259:3    0 111.2G  0 part  /
sdb           8:16   0   2.7T  0 disk  
├─sdb1        8:17   0   2.7T  0 part  
└─sdb9        8:25   0     8M  0 part  
sdc           8:32   0   2.7T  0 disk  
├─sdc1        8:33   0   2.7T  0 part  
└─sdc9        8:41   0     8M  0 part  
sdd           8:48   0   2.7T  0 disk  
├─sdd1        8:49   0   2.7T  0 part  
└─sdd9        8:57   0     8M  0 part  
sde           8:64   0   2.7T  0 disk  
├─sde1        8:65   0   2.7T  0 part  
└─sde9        8:73   0     8M  0 part  
sdf           8:80   1   961M  0 disk  
├─sdf1        8:81   1   951M  0 part  
└─sdf9        8:89   1     8M  0 part  
nvme0n1     259:0    0 111.8G  0 disk  
├─nvme0n1p1 259:1    0   512M  0 part  
└─nvme0n1p2 259:2    0 111.3G  0 part  
  └─md0       9:0    0 111.2G  0 raid1 
    └─md0p1 259:3    0 111.2G  0 part  /