NAS540: Volume down, no option to repair


All Replies

  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    Right. Your second disk is dead, I'm afraid, but the remaining disks should be enough to rebuild the array (degraded). Unfortunately your disk sdd3 has lost its device role; it is now 'spare', but assuming the disks are in the same physical positions as when you created the array, it should be 'Active device 3'.

    In that case the commands to rebuild the array are
    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric --bitmap=none /dev/md2 /dev/sda3 missing /dev/sdc3 /dev/sdd3

    Those are two lines, each starting with mdadm. I don't dare to put them in code tags, as this forum is acting strange.
    When your array is up again, you can pull the second disk, and put a new disk in to get it healthy again.
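
    A quick sanity check after the create, and the later re-add of the replacement disk, could look roughly like this (the new disk showing up as sdb is just an assumption; check /proc/partitions for the real name):
    cat /proc/mdstat
    mdadm --detail /dev/md2
    mdadm /dev/md2 --add /dev/sdb3   # only once the replacement disk is partitioned like the other members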


  • hola
    hola Posts: 5  Freshman Member
    Thank you for the great assistance.

    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric --bitmap=none /dev/md2 /dev/sda3 missing /dev/sdc3 /dev/sdd3
    I got an error; it said I need to use --grow?
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    Omit the --bitmap=none
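    In other words, the same create command with only that option removed:
    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sda3 missing /dev/sdc3 /dev/sdd3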
  • katas
    katas Posts: 9  Freshman Member
    edited January 2020
    I encountered a similar problem on my NAS540 with RAID 6.
    Can somebody help me solve it?

    # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md1 : active raid1 sda2[0] sdc2[5] sdd2[6] sdb2[4]
    1998784 blocks super 1.2 [4/4] [UUUU]

    md0 : active raid1 sda1[0] sdc1[5] sdd1[6] sdb1[4]
    1997760 blocks super 1.2 [4/4] [UUUU]

    # cat /proc/partitions
    major minor #blocks name

    7 0 147456 loop0
    31 0 256 mtdblock0
    31 1 512 mtdblock1
    31 2 256 mtdblock2
    31 3 10240 mtdblock3
    31 4 10240 mtdblock4
    31 5 112640 mtdblock5
    31 6 10240 mtdblock6
    31 7 112640 mtdblock7
    31 8 6144 mtdblock8
    8 0 5860522584 sda
    8 1 1998848 sda1
    8 2 1999872 sda2
    8 3 5856522240 sda3
    8 16 5860522584 sdb
    8 17 1998848 sdb1
    8 18 1999872 sdb2
    8 19 5856522240 sdb3
    8 32 5860522584 sdc
    8 33 1998848 sdc1
    8 34 1999872 sdc2
    8 35 5856522240 sdc3
    8 48 5860522584 sdd
    8 49 1998848 sdd1
    8 50 1999872 sdd2
    8 51 5856522240 sdd3
    31 9 102424 mtdblock9
    9 0 1997760 md0
    9 1 1998784 md1
    31 10 4464 mtdblock10

    # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x4
    Array UUID : a8c78b8e:4ab1fe37:160fd540:b7a8cd63
    Name : NAS540:2 (local to host NAS540)
    Creation Time : Sat Dec 14 05:08:13 2019
    Raid Level : raid6
    Raid Devices : 4

    Avail Dev Size : 11712782336 (5585.09 GiB 5996.94 GB)
    Array Size : 11712781952 (11170.18 GiB 11993.89 GB)
    Used Dev Size : 11712781952 (5585.09 GiB 5996.94 GB)
    Data Offset : 262144 sectors
    Super Offset : 8 sectors
    State : clean
    Device UUID : 57579709:2a1053f9:4e0e390f:bcef6466

    Reshape pos'n : 7602176000 (7250.00 GiB 7784.63 GB)
    New Layout : left-symmetric

    Update Time : Sat Dec 21 05:08:45 2019
    Checksum : aeda0bb6 - correct
    Events : 2801362

    Layout : left-symmetric-6
    Chunk Size : 64K

    Device Role : Active device 0
    Array State : AAAA ('A' == active, '.' == missing)
    /dev/sdb3:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x4
    Array UUID : a8c78b8e:4ab1fe37:160fd540:b7a8cd63
    Name : NAS540:2 (local to host NAS540)
    Creation Time : Sat Dec 14 05:08:13 2019
    Raid Level : raid6
    Raid Devices : 4

    Avail Dev Size : 11712782336 (5585.09 GiB 5996.94 GB)
    Array Size : 11712781952 (11170.18 GiB 11993.89 GB)
    Used Dev Size : 11712781952 (5585.09 GiB 5996.94 GB)
    Data Offset : 262144 sectors
    Super Offset : 8 sectors
    State : active
    Device UUID : 27e3c53c:450be072:4b3e069c:247f1661

    Reshape pos'n : 7602176000 (7250.00 GiB 7784.63 GB)
    New Layout : left-symmetric

    Update Time : Sat Dec 21 05:08:45 2019
    Checksum : e3145207 - correct
    Events : 2801362

    Layout : left-symmetric-6
    Chunk Size : 64K

    Device Role : Active device 1
    Array State : AAAA ('A' == active, '.' == missing)
    /dev/sdc3:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x4
    Array UUID : a8c78b8e:4ab1fe37:160fd540:b7a8cd63
    Name : NAS540:2 (local to host NAS540)
    Creation Time : Sat Dec 14 05:08:13 2019
    Raid Level : raid6
    Raid Devices : 4

    Avail Dev Size : 11712782336 (5585.09 GiB 5996.94 GB)
    Array Size : 11712781952 (11170.18 GiB 11993.89 GB)
    Used Dev Size : 11712781952 (5585.09 GiB 5996.94 GB)
    Data Offset : 262144 sectors
    Super Offset : 8 sectors
    State : active
    Device UUID : af400f15:def9e284:5c8da320:6b6f1d92

    Reshape pos'n : 7602176000 (7250.00 GiB 7784.63 GB)
    New Layout : left-symmetric

    Update Time : Sat Dec 21 05:08:45 2019
    Checksum : 8304dd81 - correct
    Events : 2801362

    Layout : left-symmetric-6
    Chunk Size : 64K

    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing)
    /dev/sdd3:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x6
    Array UUID : a8c78b8e:4ab1fe37:160fd540:b7a8cd63
    Name : NAS540:2 (local to host NAS540)
    Creation Time : Sat Dec 14 05:08:13 2019
    Raid Level : raid6
    Raid Devices : 4

    Avail Dev Size : 11712782336 (5585.09 GiB 5996.94 GB)
    Array Size : 11712781952 (11170.18 GiB 11993.89 GB)
    Used Dev Size : 11712781952 (5585.09 GiB 5996.94 GB)
    Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Recovery Offset : 7602176000 sectors
    State : active
    Device UUID : 4e2d4e04:9660ec5a:0a69bd17:d20c07ef

    Reshape pos'n : 7602176000 (7250.00 GiB 7784.63 GB)
    New Layout : left-symmetric

    Update Time : Sat Dec 21 05:08:45 2019
    Checksum : 6170a9f2 - correct
    Events : 2801362

    Layout : left-symmetric-6
    Chunk Size : 64K

    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing)


  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    What happens if you try to assemble it?

    su
    mdadm --assemble /dev/md2 /dev/sd[abcd]3
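
    If the assemble succeeds, the array should show up again in the overview:
    cat /proc/mdstat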

  • katas
    katas Posts: 9  Freshman Member
    edited January 2020
    Thank you for your reply. @Mijzelf
    I still have the problem after trying to assemble the array on my NAS540 with that command.

    # mdadm --assemble /dev/md2 /dev/sd[abcd]3
    mdadm: Failed to restore critical section for reshape, sorry.
    Possibly you needed to specify the --backup-file

    What should I do next?
    Do you know where the 'raid-file/backup' holding the configuration would be located on the system?
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    where the 'raid-file/backup' would be located

    I don't think such a file exists. But if it does, it would be in /firmware/mnt/sysdisk/, /firmware/mnt/nand/ or /etc/zyxel/, as those are the only storage pools outside your data array. The latter two are in flash memory.
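
    A rough way to search those locations for anything that looks like a reshape backup (the file name pattern is only a guess; mdadm does not use a fixed name):
    ls -lR /firmware/mnt/sysdisk /firmware/mnt/nand /etc/zyxel
    find /firmware/mnt/sysdisk /firmware/mnt/nand /etc/zyxel -type f -name '*backup*' 2>/dev/null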

    Your array was 'reshaping', and the reshape stalled at around 70%. What exactly were you doing?

  • katas
    katas Posts: 9  Freshman Member
    Your array was 'reshaping', and the reshape stalled at around 70%. What exactly were you doing?

    Don't know. What else can I do? :s
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    Have you checked the locations I gave you?

    You don't know what you were doing, so this problem came out of the blue? Is it possible, as far as you know, that one disk was dropped from the array and automatically added again? On RAID 5 that would look different, but unfortunately I have no experience with RAID 6 problems.
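
    One place that might hold a clue (a generic sketch; the exact messages vary) is the kernel log:
    dmesg | grep -i -E 'md2|raid|sd[a-d]'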
