NAS540 RAID5 Volume Down

Hello everybody, hoping somebody can help me with an issue I recently had on my NAS540.

The NAS540 has four identical 8 TB drives in a RAID5 configuration. I updated it to the most recent firmware about three weeks ago. Last week I was backing up the NAS540 to another NAS; when the copy was about 95% done, the NAS540 sounded a tone and the copying stopped. The web interface shows that the volumes are down and tells me to repair them, but there is no repair option. The status of all of the drives is Normal, and the SMART status shows that all of the drives are Good. When I checked /proc/mdstat it showed a resync in progress, so I let it run to completion. After three days the resync was done, but the volume was still down. Any help would be appreciated. Below is some info I gathered on the drives/array:
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[0] sdd3[3] sdc3[2] sdb3[1]
      23429685696 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid1 sda2[0] sdd2[5] sdc2[4] sdb2[1]
      1998784 blocks super 1.2 [4/4] [UUUU]
md0 : active raid1 sda1[0] sdd1[5] sdc1[4] sdb1[1]
      1997760 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
~ # cat /proc/partitions
major minor  #blocks  name
   7        0     146432 loop0
  31        0        256 mtdblock0
  31        1        512 mtdblock1
  31        2        256 mtdblock2
  31        3      10240 mtdblock3
  31        4      10240 mtdblock4
  31        5     112640 mtdblock5
  31        6      10240 mtdblock6
  31        7     112640 mtdblock7
  31        8       6144 mtdblock8
   8        0 7814026584 sda
   8        1    1998848 sda1
   8        2    1999872 sda2
   8        3 7810026496 sda3
   8       16 7814026584 sdb
   8       17    1998848 sdb1
   8       18    1999872 sdb2
   8       19 7810026496 sdb3
   8       32 7814026584 sdc
   8       33    1998848 sdc1
   8       34    1999872 sdc2
   8       35 7810026496 sdc3
   8       48 7814026584 sdd
   8       49    1998848 sdd1
   8       50    1999872 sdd2
   8       51 7810026496 sdd3
  31        9     102424 mtdblock9
   9        0    1997760 md0
   9        1    1998784 md1
  31       10       4464 mtdblock10
   9        2 23429685696 md2
 253        0     102400 dm-0
 253        1 17178820608 dm-1
 253        2 6250762240 dm-2
~ # mdadm --examine /dev/sd[abcd]3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0a21dc88:5dd8dc0a:f1ee797f:b2a288fe
           Name : NAS540:2 (local to host NAS540)
  Creation Time : Sun Nov 20 14:31:11 2022
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 15619790848 (7448.10 GiB 7997.33 GB)
     Array Size : 23429685696 (22344.29 GiB 23992.00 GB)
  Used Dev Size : 15619790464 (7448.10 GiB 7997.33 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 41a16c1f:1472f01a:4666611b:88a13aba
    Update Time : Mon Dec  4 20:44:20 2023
       Checksum : 41252982 - correct
         Events : 32
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 0
    Array State : AAAA ('A' == active, '.' == missing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0a21dc88:5dd8dc0a:f1ee797f:b2a288fe
           Name : NAS540:2 (local to host NAS540)
  Creation Time : Sun Nov 20 14:31:11 2022
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 15619790848 (7448.10 GiB 7997.33 GB)
     Array Size : 23429685696 (22344.29 GiB 23992.00 GB)
  Used Dev Size : 15619790464 (7448.10 GiB 7997.33 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ba1e589d:9a7f8d74:1ec11c74:ef028ed1
    Update Time : Mon Dec  4 20:44:20 2023
       Checksum : 88bc70c2 - correct
         Events : 32
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 1
    Array State : AAAA ('A' == active, '.' == missing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0a21dc88:5dd8dc0a:f1ee797f:b2a288fe
           Name : NAS540:2 (local to host NAS540)
  Creation Time : Sun Nov 20 14:31:11 2022
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 15619790848 (7448.10 GiB 7997.33 GB)
     Array Size : 23429685696 (22344.29 GiB 23992.00 GB)
  Used Dev Size : 15619790464 (7448.10 GiB 7997.33 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 2889ac89:82ced3d6:ff391e10:00ff81b5
    Update Time : Mon Dec  4 20:44:20 2023
       Checksum : 574c9f0b - correct
         Events : 32
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0a21dc88:5dd8dc0a:f1ee797f:b2a288fe
           Name : NAS540:2 (local to host NAS540)
  Creation Time : Sun Nov 20 14:31:11 2022
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 15619790848 (7448.10 GiB 7997.33 GB)
     Array Size : 23429685696 (22344.29 GiB 23992.00 GB)
  Used Dev Size : 15619790464 (7448.10 GiB 7997.33 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 3d36c62e:6723fc7e:78e967e5:47b9981e
    Update Time : Mon Dec  4 20:44:20 2023
       Checksum : e2ef11b5 - correct
         Events : 32
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing)
~ # mdadm --assemble --scan --verbose
mdadm main: failed to get exclusive lock on mapfile
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/dm-2
mdadm: no recogniseable superblock on /dev/dm-1
mdadm: no recogniseable superblock on /dev/dm-0
mdadm: no recogniseable superblock on /dev/md2
mdadm: cannot open device /dev/mtdblock10: Device or resource busy
mdadm: no recogniseable superblock on /dev/md1
mdadm: no recogniseable superblock on /dev/md0
mdadm: cannot open device /dev/mtdblock9: Device or resource busy
mdadm: /dev/sdd3 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
mdadm: /dev/sdd1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdd
mdadm: /dev/sdc3 is busy - skipping
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdc1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdc
mdadm: /dev/sdb3 is busy - skipping
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sda3 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: no recogniseable superblock on /dev/mtdblock8
mdadm: no recogniseable superblock on /dev/mtdblock7
mdadm: no recogniseable superblock on /dev/mtdblock6
mdadm: no recogniseable superblock on /dev/mtdblock5
mdadm: no recogniseable superblock on /dev/mtdblock4
mdadm: no recogniseable superblock on /dev/mtdblock3
mdadm: no recogniseable superblock on /dev/mtdblock2
mdadm: no recogniseable superblock on /dev/mtdblock1
mdadm: no recogniseable superblock on /dev/mtdblock0
mdadm: no recogniseable superblock on /dev/loop0
mdadm: No arrays found in config file or automatically
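A quick way to read the --examine output above: every member of a consistent array reports the same Events count. A sketch to compare them in one step (the awk field position assumes the output format shown above):

```shell
# Print the distinct Events values across the four members; a single line
# of output means all superblocks agree and mdadm has nothing to rebuild.
mdadm --examine /dev/sd[abcd]3 | awk '/Events/ {print $3}' | sort -u
```

Here all four members report Events : 32 and Array State AAAA, matching what /proc/mdstat says: the array itself is healthy, so the problem has to sit above the RAID layer.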

All Replies

  • Mijzelf (Posts: 2,639, Guru Member)
    Answer ✓

    Your RAID array is not down, but maybe there is a blocking filesystem error on the data volume. Your data should be on the devices /dev/dm-1 and /dev/dm-2. Are they mounted?

    cat /proc/mounts
    

    If not, try to mount them manually:

    mkdir /tmp/mountpoint
    su
    mount /dev/dm-1 /tmp/mountpoint
    

    If that fails, have a look at the last lines of the kernel log (dmesg) to see if it tells you why.

  • Elyas (Posts: 3)
    Hi Mijzelf, appreciate your reply.

    cat /proc/mounts showed that /dev/dm-1 and /dev/dm-2 were not mounted.

    Manually mounting /dev/dm-1 worked, and I could see everything.

    Manually mounting /dev/dm-2 did not work and gave this error:

    ~ # mount /dev/dm-2 /tmp/mountpoint
    mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg_0a21dc88-lv_1c60db89,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail or so.

    Running dmesg | tail gave me this info:
    
    ~ # dmesg | tail
    [ 55.465174] EXT4-fs (dm-2): group descriptors corrupted!
    [ 84.517561] bz time = 1
    [ 84.520019] bz status = 1
    [ 84.522643] bz_timer_status = 0
    [ 84.525790] start buzzer
    [ 283.910903] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
    [ 329.767392] EXT4-fs (dm-2): ext4_check_descriptors: Block bitmap for group 128 not in group (block 2459703844)!
    [ 329.777560] EXT4-fs (dm-2): group descriptors corrupted!
    [ 392.872822] EXT4-fs (dm-2): ext4_check_descriptors: Block bitmap for group 128 not in group (block 2459703844)!
    [ 392.882971] EXT4-fs (dm-2): group descriptors corrupted!

  • Mijzelf (Posts: 2,639, Guru Member)
    Answer ✓

    So there are filesystem errors. You can try to repair them:

    su
    e2fsck /dev/dm-2
    

    When the repair has succeeded, reboot the box to let the firmware mount it.

  • Elyas (Posts: 3)

    Hi Mijzelf, I ran e2fsck /dev/dm-2 and after about 4 hours the process completed. Rebooted the NAS540 and the volumes mounted just fine. All data is accessible. I greatly appreciate the time you spent looking at and responding to my post.
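For readers who hit the same ext4 group-descriptor errors but whose plain e2fsck run fails outright: ext4 keeps backup copies of the superblock, and e2fsck can be pointed at one of them with -b. A hedged sketch, not specific to the NAS540 firmware; the backup location depends on the filesystem's block size, and mke2fs -n only simulates creation and prints the locations without writing anything:

```shell
# Dry run: list where the backup superblocks live; -n makes NO changes.
mke2fs -n /dev/dm-2
# Retry the check against a backup copy (8193 for 1K blocks, 32768 for 4K).
e2fsck -b 32768 /dev/dm-2
```

For the locations printed by mke2fs -n to be accurate, the options passed must match those used when the filesystem was created, so treat its output as a starting point rather than gospel.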
