
Problem with MDADM / ZFS


I've just reinstalled Kubuntu 22.04 (I was attempting a 20.04 => 22.04 LTS upgrade and it all went badly wrong).

I'm now having problems getting my disk arrays working again. I couldn't remember how each pair was set up, but one turned out to be a two-disk ZFS pool, which was fairly easy to get working.

The second pair appeared to be a two-disk mdadm array. I installed mdadm, and it generated a conf file.
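For reference, the install step was roughly the following (a sketch rather than an exact transcript; the Ubuntu package's post-install script runs mkconf to generate /etc/mdadm/mdadm.conf, which matches the comment at the bottom of the file below):

# install the mdadm tools; the package generates /etc/mdadm/mdadm.conf via mkconf
sudo apt install mdadm
# if the ARRAY line ever needs regenerating from the currently assembled arrays:
sudo mdadm --detail --scan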

john@Adnoartina:/cctv/fs1$ cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=8ccafb9b:d754b713:ac709ab4:78ca2f53 name=Adnoartina:0

# This configuration was auto-generated on Wed, 16 Aug 2023 22:44:54 +0100 by mkconf

I then ran update-initramfs -u, as the conf file instructs.
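That is:

# rebuild the initramfs so early boot picks up the updated mdadm.conf
sudo update-initramfs -u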

I still encountered issues similar to those described here ( https://superuser.com/questions/566867/unable-to-reassemble-md-raid-on-drives-pulled-from-readynas-duo-v1 ): "mdadm: failed to add /dev/sdb3 to /dev/md/2_0: Invalid argument"

so I followed the linked suggestion and reassembled with --update=devicesize.
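In sketch form, with the device names as they appear in the lsblk output further down (the exact invocation may have differed slightly):

# stop any partial assembly, then reassemble while rewriting the
# device-size field in each member's 1.2 superblock
sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 --update=devicesize /dev/sdd1 /dev/sde1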

This seemed to make progress, but I still couldn't mount the array.

john@Adnoartina:/cctv/fs1$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
loop0     7:0    0     4K  1 loop  /snap/bare/5
loop1     7:1    0  63.4M  1 loop  /snap/core20/1974
loop2     7:2    0 237.2M  1 loop  /snap/firefox/2987
loop3     7:3    0 349.7M  1 loop  /snap/gnome-3-38-2004/143
loop4     7:4    0  91.7M  1 loop  /snap/gtk-common-themes/1535
loop5     7:5    0  53.3M  1 loop  /snap/snapd/19457
sda       8:0    0   7.3T  0 disk  
├─sda1    8:1    0   7.3T  0 part  
└─sda9    8:9    0     8M  0 part  
sdb       8:16   0   7.3T  0 disk  
├─sdb1    8:17   0   7.3T  0 part  
└─sdb9    8:25   0     8M  0 part  
sdc       8:32   0  55.9G  0 disk  
├─sdc1    8:33   0   512M  0 part  /boot/efi
└─sdc2    8:34   0  55.4G  0 part  /var/snap/firefox/common/host-hunspell
                                   /
sdd       8:48   0   2.7T  0 disk  
├─sdd1    8:49   0   2.7T  0 part  
│ └─md0   9:0    0   2.7T  0 raid1 
└─sdd9    8:57   0     8M  0 part  
sde       8:64   0   2.7T  0 disk  
├─sde1    8:65   0   2.7T  0 part  
│ └─md0   9:0    0   2.7T  0 raid1 
└─sde9    8:73   0     8M  0 part  
john@Adnoartina:/cctv/fs1$ sudo mount /dev/md/0 Important/
mount: /cctv/fs1/Important: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
john@Adnoartina:/cctv/fs1$ sudo mount /dev/md0 Important/
mount: /cctv/fs1/Important: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.

fsck wasn't helpful

john@Adnoartina:/cctv/fs1$ sudo fsck -n /dev/md/0
fsck from util-linux 2.37.2
e2fsck 1.46.5 (30-Dec-2021)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/md0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>
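
(A useful sanity check at this point, rather than assuming the array holds ext4, is to ask what signatures are actually on the devices; both commands below only read metadata, nothing is modified:)

# probe for filesystem/RAID signatures on the assembled array and its members
sudo blkid -p /dev/md0 /dev/sdd1 /dev/sde1
# wipefs with no erase options just lists the signatures it finds
sudo wipefs /dev/md0 /dev/sdd1 /dev/sde1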

More info

john@Adnoartina:/cctv/fs1$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Jun 22 20:52:51 2020
        Raid Level : raid1
        Array Size : 2930132992 (2.73 TiB 3.00 TB)
     Used Dev Size : 2930132992 (2.73 TiB 3.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Aug 16 23:27:13 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : Adnoartina:0  (local to host Adnoartina)
              UUID : 8ccafb9b:d754b713:ac709ab4:78ca2f53
            Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

I then saw a suggestion to run fdisk -l and got the following surprising output, where the partition type of the drives is reported as "Solaris /usr & Apple ZFS":

Disk /dev/sdd: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: TOSHIBA DT01ACA3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F3AE916B-DB77-9844-98DA-250443CFC5A4

Device          Start        End    Sectors  Size Type
/dev/sdd1        2048 5860515839 5860513792  2.7T Solaris /usr & Apple ZFS
/dev/sdd9  5860515840 5860532223      16384    8M Solaris reserved 1


Disk /dev/sde: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: TOSHIBA DT01ACA3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 20CFD558-A582-8847-BB20-E9585044D859

Device          Start        End    Sectors  Size Type
/dev/sde1        2048 5860515839 5860513792  2.7T Solaris /usr & Apple ZFS
/dev/sde9  5860515840 5860532223      16384    8M Solaris reserved 1


Disk /dev/md0: 2.73 TiB, 3000456183808 bytes, 5860265984 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
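
That "Solaris /usr & Apple ZFS" type GUID, together with the small 8 MB "Solaris reserved" ninth partition, is exactly the layout ZFS creates when it is handed a whole disk, so it looks as though these disks carry ZFS metadata as well. A non-destructive way to check for ZFS labels on the data partitions (assuming zfsutils-linux is installed; zdb -l only reads) would be:

# dump any ZFS vdev labels present on the partitions
sudo zdb -l /dev/sdd1
sudo zdb -l /dev/sde1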

There's also a mention of bad blocks in the superblock dump below, but I suppose that may just be the disks starting to age (a SMART check is sketched after the output).

john@Adnoartina:/cctv/fs1$ sudo mdadm -E /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x8
     Array UUID : 8ccafb9b:d754b713:ac709ab4:78ca2f53
           Name : Adnoartina:0  (local to host Adnoartina)
  Creation Time : Mon Jun 22 20:52:51 2020
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5860249600 sectors (2.73 TiB 3.00 TB)
     Array Size : 2930132992 KiB (2.73 TiB 3.00 TB)
  Used Dev Size : 5860265984 sectors (2.73 TiB 3.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=18446744073709535232 sectors
          State : clean
    Device UUID : 740242b8:a390e2aa:9ccbc034:3346bd6d

    Update Time : Wed Aug 16 23:27:13 2023
  Bad Block Log : 512 entries available at offset 24 sectors - bad blocks present.
       Checksum : eddcc88c - correct
         Events : 2


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
john@Adnoartina:/cctv/fs1$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdd1[0] sde1[1]
      2930132992 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
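
The bad-block entry above is md's own bad block log; to see whether the disks themselves are degrading, a SMART check is the obvious next step (smartmontools assumed installed; these commands are read-only):

# overall health verdict plus error/reallocation counters for each member disk
sudo smartctl -H -A /dev/sdd
sudo smartctl -H -A /dev/sde
# optionally kick off an extended self-test
sudo smartctl -t long /dev/sdd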

I'm now stuck, so any suggestions would be appreciated. Is there any possibility that on my old install I got into a weird mixed mdadm/ZFS state? I know I used these drives with mdadm a long time ago, and I wonder whether I later converted them to ZFS without all of the old mdadm metadata being wiped.

Thanks,

John

Fixed it. I found another page with a suggestion for the case where disks have been migrated between servers and aren't detected by zpool. I stopped using mdadm and ran the following command:

sudo zpool import -d /dev/
   pool: ToBackup
     id: 12065074383621521084
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        ToBackup    ONLINE
          mirror-0  ONLINE
            sdd     ONLINE
            sde     ONLINE
john@Adnoartina:~$ sudo zpool import -d /dev/ -f ToBackup

john@Adnoartina:~$ zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ToBackup  2.72T  1.05T  1.67T        -         -     0%    38%  1.00x    ONLINE  -
cctv      14.5T  11.8T  2.71T        -         -    40%    81%  1.00x    ONLINE  -
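
One loose end worth tidying: with the ARRAY line still in /etc/mdadm/mdadm.conf and the md superblocks still on the partitions, mdadm may reassemble md0 again at boot and get in ZFS's way. (The md 1.2 superblock sits 4 KiB into each partition, which appears to fall in padding that ZFS leaves untouched, which would explain how both sets of metadata survived side by side.) Something like the following should stop that happening; it's a sketch, with the device names as above:

# stop the stale md array and comment out its ARRAY line in mdadm.conf
sudo mdadm --stop /dev/md0
sudo sed -i '/^ARRAY \/dev\/md\/0/s/^/#/' /etc/mdadm/mdadm.conf
sudo update-initramfs -u
# only once you're certain the pool and its data are intact, the md metadata
# on the members could also be removed:
# sudo mdadm --zero-superblock /dev/sdd1 /dev/sde1

With that done, only the zpool should claim sdd and sde at boot.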