I just upgraded my oldish Thecus 7-bay RAID storage from Ubuntu 16.04 LTS (server) to 18.04. Everything is fine (it boots from a separate DOM) except that the RAID won't assemble, and I am at a bit of a loss as to why, also because I am having a hard time getting any error messages out of mdadm.
sudo mdadm --examine /dev/sd[b-h]2
gives me what I would expect:
/dev/sdb2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : b7f98535:c88ab32e:d0ed4cfd:06b9ea7d
Name : N7700PRO:0
Creation Time : Fri Nov 8 20:05:13 2013
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5855244288 (2792.00 GiB 2997.89 GB)
Array Size : 14638110720 (13959.99 GiB 14989.43 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=0 sectors
State : clean
Device UUID : 1bc695f7:a1621559:db15c3d0:2b8f423a
Internal Bitmap : 2 sectors from superblock
Update Time : Sun Oct 31 12:09:41 2021
Checksum : d11e7d19 - correct
Events : 58491
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : b7f98535:c88ab32e:d0ed4cfd:06b9ea7d
Name : N7700PRO:0
Creation Time : Fri Nov 8 20:05:13 2013
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5855244288 (2792.00 GiB 2997.89 GB)
Array Size : 14638110720 (13959.99 GiB 14989.43 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=0 sectors
State : clean
Device UUID : 2a6f2090:fff720a3:d99c9fab:f9dfadc5
Internal Bitmap : 2 sectors from superblock
Update Time : Sun Oct 31 12:09:41 2021
Checksum : 19fc94d2 - correct
Events : 58491
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : b7f98535:c88ab32e:d0ed4cfd:06b9ea7d
Name : N7700PRO:0
Creation Time : Fri Nov 8 20:05:13 2013
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5855244288 (2792.00 GiB 2997.89 GB)
Array Size : 14638110720 (13959.99 GiB 14989.43 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=0 sectors
State : clean
Device UUID : 4f61132e:9ffe0570:c16d6949:dbc0b756
Internal Bitmap : 2 sectors from superblock
Update Time : Sun Oct 31 12:09:41 2021
Checksum : b3a83de2 - correct
Events : 58491
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : b7f98535:c88ab32e:d0ed4cfd:06b9ea7d
Name : N7700PRO:0
Creation Time : Fri Nov 8 20:05:13 2013
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5855244288 (2792.00 GiB 2997.89 GB)
Array Size : 14638110720 (13959.99 GiB 14989.43 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=0 sectors
State : clean
Device UUID : 4da400bb:509cf8b4:5377b144:e30034b1
Internal Bitmap : 2 sectors from superblock
Update Time : Sun Oct 31 12:09:41 2021
Checksum : db4c682b - correct
Events : 58491
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : b7f98535:c88ab32e:d0ed4cfd:06b9ea7d
Name : N7700PRO:0
Creation Time : Fri Nov 8 20:05:13 2013
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5855244288 (2792.00 GiB 2997.89 GB)
Array Size : 14638110720 (13959.99 GiB 14989.43 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=0 sectors
State : clean
Device UUID : 50e6eda8:0ed215df:210e1e89:f298151e
Internal Bitmap : 2 sectors from superblock
Update Time : Sun Oct 31 12:09:41 2021
Checksum : a4a50ecc - correct
Events : 58491
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 5
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : b7f98535:c88ab32e:d0ed4cfd:06b9ea7d
Name : N7700PRO:0
Creation Time : Fri Nov 8 20:05:13 2013
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5855244288 (2792.00 GiB 2997.89 GB)
Array Size : 14638110720 (13959.99 GiB 14989.43 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=0 sectors
State : clean
Device UUID : 5da7881c:3ea798d5:846551c7:a0f81edc
Internal Bitmap : 2 sectors from superblock
Update Time : Sun Oct 31 12:09:41 2021
Checksum : aff5c1a - correct
Events : 58491
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 4
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdh2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : b7f98535:c88ab32e:d0ed4cfd:06b9ea7d
Name : N7700PRO:0
Creation Time : Fri Nov 8 20:05:13 2013
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 5855244288 (2792.00 GiB 2997.89 GB)
Array Size : 14638110720 (13959.99 GiB 14989.43 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=0 sectors
State : clean
Device UUID : 7eea318d:c5271920:6dc3649e:d00495e3
Internal Bitmap : 2 sectors from superblock
Update Time : Sun Oct 31 12:09:41 2021
Checksum : a4b289dc - correct
Events : 58491
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 6
Array State : AAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
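As a quick sanity check (my own arithmetic, not part of the mdadm output), the reported sizes are self-consistent for a 7-disk RAID6, where two disks' worth of capacity goes to parity:

```python
# Sanity check of the mdadm --examine numbers above (my own arithmetic).
raid_devices = 7                    # "Raid Devices"
avail_dev_sectors = 5855244288      # "Avail Dev Size" (512-byte sectors)
array_size_kib = 14638110720        # "Array Size" (reported in 1 KiB blocks)

# RAID6 spends two disks' worth of capacity on parity, so usable space is
# (n - 2) data disks; divide sectors by 2 to convert to KiB.
data_disks = raid_devices - 2
assert data_disks * avail_dev_sectors // 2 == array_size_kib
print("superblock sizes are consistent")
```

So the superblocks themselves look coherent; the problem is not in the metadata.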
But for cat /proc/mdstat
I get:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
A sudo mdadm --assemble --scan
does nothing.
I checked that the upgrade process copied the mdadm.conf file from the old location to /etc/mdadm and that it is what it should be. It reads:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
DEVICE /dev/null
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# ARRAY /dev/md/10 metadata=1.2 UUID=dd3b3236:d90ea6d1:bce9dec4:17146f0b name=N7700PRO:10
ARRAY /dev/md/0 metadata=1.2 UUID=b7f98535:c88ab32e:d0ed4cfd:06b9ea7d name=N7700PRO:0
# ARRAY /dev/md/50 metadata=1.2 UUID=f344ec6e:9a547390:2b59ee57:7ffbae6a name=N7700PRO:50
# This file was auto-generated on Mon, 16 Feb 2015 15:05:40 +0100
# by mkconf $Id$
I checked root's mail, since this appears to be the way mdadm delivers error messages, but it is empty. I also tried directing the mail to a local user; same result.
An interesting point: immediately after the upgrade I had mdadm version 4.1+rc1
or so, and I downgraded it to 3.3-2ubuntu7.6
just to try it. This didn't change anything, but interestingly, during the downgrade I got exactly 7 error messages (I have 7 RAID disks):
blockdev: IOCTL-Fehler bei BLKGETSIZE: Die Datei ist zu groß
blockdev: IOCTL-Fehler bei BLKGETSIZE: Die Datei ist zu groß
blockdev: IOCTL-Fehler bei BLKGETSIZE: Die Datei ist zu groß
blockdev: IOCTL-Fehler bei BLKGETSIZE: Die Datei ist zu groß
blockdev: IOCTL-Fehler bei BLKGETSIZE: Die Datei ist zu groß
blockdev: IOCTL-Fehler bei BLKGETSIZE: Die Datei ist zu groß
blockdev: IOCTL-Fehler bei BLKGETSIZE: Die Datei ist zu groß
(which is German for "ioctl error on BLKGETSIZE: File too large", i.e. the EFBIG error).
Now an interesting point is that I run a 32-bit Ubuntu. Maybe something in mdadm is broken on 32-bit Ubuntu beyond 16.04, say a size that should be an explicit 64-bit integer is just a plain int? I find this a bit odd, since it should not be that uncommon to run 32-bit Ubuntu on Thecus hardware. Btw.: the CPU is an Intel(R) Core(TM)2 Duo CPU T5500 @ 1.66GHz.
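If that hypothesis is right, the numbers fit: BLKGETSIZE reports the device size as a count of 512-byte sectors in an unsigned long, which is only 32 bits wide on i386, and each of these partitions alone exceeds the 2 TiB (2^32 sectors) such a long can express. A small sketch of the arithmetic (my own illustration, not mdadm code):

```python
# Sketch: why a 32-bit BLKGETSIZE could fail with EFBIG on these disks.
# BLKGETSIZE returns the size in 512-byte sectors via an unsigned long,
# which is only 32 bits wide in i386 userspace.
dev_sectors = 5855244288            # "Avail Dev Size" from mdadm --examine
ulong32_max = 2**32 - 1             # largest value a 32-bit unsigned long holds
print(dev_sectors > ulong32_max)    # prints True: the sector count doesn't fit
```

This would explain why the kernel answers that ioctl with "File too large" on these partitions (BLKGETSIZE64, which returns bytes as a 64-bit value, would not have this limit).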
My assumption is that mdadm is broken on 32-bit Ubuntu 18.04.
I installed 64-bit Ubuntu 20.04 on the same machine, and everything worked as expected.
I will report a bug.