NAS540: Volume down, no option to repair

All Replies

  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    From this information, that volume seems healthy.
    The RAID array md4 is built from sdc3, and is up.
    Yet there is something strange with md0 and md1 (firmware and swap). Normally a 4-disk ZyXEL NAS will create a 4-member raid1 array for these volumes, and the 4 disks have roles 0, 1, 2 and 3 in that array.
    But here we have roles 0, 1 and 4:
    
    md0 : active raid1 sda1[0] sdb1[1] sdc1[4]
    *Maybe* the firmware expects role 3 here, and is whining because it can't find that.

    It is possible to repair that manually.
    su
    mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
    mdadm /dev/md1 --fail /dev/sdc2 --remove /dev/sdc2
    mdadm /dev/md0 --fail detached --remove detached
    mdadm /dev/md1 --fail detached --remove detached
    
    After this, both md0 and md1 should be 2-member raid1 arrays, something like
    
    md0 : active raid1 sda1[0] sdb1[1] 
          1997760 blocks super 1.2 [2/2] [UU]
    
    Now you can add sdc again:
    mdadm /dev/md0 --add /dev/sdc1
    mdadm /dev/md1 --add /dev/sdc2
    
    I expect a 3-disk array now:
    md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
          1997760 blocks super 1.2 [3/3] [UUU]
    
    And finally you can add a (missing) 4th member:
    mdadm /dev/md0 --add missing
    mdadm /dev/md1 --add missing
    
    And I expect:
    md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
          1997760 blocks super 1.2 [4/3] [UUU_]
    
    Don't know if that helps, though.
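    In any case, you can double-check the member roles at any point; these commands are read-only and use the same device names as above:

    mdadm --examine /dev/sdc1    # 'Device Role' shows which slot sdc1 has in md0
    mdadm --examine /dev/sdc2    # same for md1
    cat /proc/mdstat             # overall state of all arrays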
  • aranel42
    aranel42 Posts: 10  Freshman Member
    And just because I'm a complete noob when it comes to stuff like this: will this touch the data on the disks in slots 1 and 2? I'm guessing not, as I presume this will rebuild how the NAS sees the drives and doesn't touch the actual drives, but I wanted to confirm before I do anything :smile:
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    Indeed, these commands are not supposed to do anything with the data in slots 1 & 2.
    That data is in the raid arrays md2 and md3, which use the partitions sda3 and sdb3. Those are not touched by these commands.
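    If you want to verify that for yourself before running anything, these commands only read and don't change anything:

    cat /proc/mdstat           # lists the member partitions of every md array
    mdadm --detail /dev/md2    # shows which partition backs this data volume
    mdadm --detail /dev/md3    # same for the other one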
  • aranel42
    aranel42 Posts: 10  Freshman Member
    Many thanks for your continued help.
    I started to run the commands, but am getting stuck on re-adding sdc. When I run
    mdadm /dev/md0 --add /dev/sdc1
    I get this message:
    mdadm: Cannot open /dev/sdc1: Device or resource busy
    I'm finding quite a few results with this error on Google, but am not really versed enough in Linux or raid setup to make head or tail of the results, and don't really want to enter commands willy-nilly.
    Worth noting is also that I got a reply from Zyxel support and was told that they would see if they could repro and/or investigate it on Monday.
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    Did 'cat /proc/mdstat' show the expected output? Strange. What keeps sdc1 busy? Unless it was automagically re-added to the array by the firmware after you removed it, I wouldn't know.
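    Something read-only like this might show what state sdc1 is actually in:

    cat /proc/mdstat             # is sdc1 already listed in md0 again?
    mdadm --examine /dev/sdc1    # superblock of the partition itself
    mdadm --detail /dev/md0      # member list and state as the kernel sees it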


  • aranel42
    aranel42 Posts: 10  Freshman Member
    edited March 2019
    Hmm, must have been some temporary thing. I was away for a day and retried your suggested commands today, and now everything worked. The only thing I got an error on was
    mdadm /dev/md0 --add missing
    where I got a message saying
    mdadm: 'missing' only meaningful with --re-add
    which I tried instead, and gave no error.
    Here's my whole input log, just for documentation's sake:
    ~ # mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
    mdadm: set /dev/sdc1 faulty in /dev/md0
    mdadm: hot removed /dev/sdc1 from /dev/md0
    ~ # mdadm /dev/md1 --fail /dev/sdc2 --remove /dev/sdc2
    mdadm: set /dev/sdc2 faulty in /dev/md1
    mdadm: hot removed /dev/sdc2 from /dev/md1
    ~ # mdadm /dev/md0 --fail detached --remove detached
    ~ # mdadm /dev/md1 --fail detached --remove detached
    ~ # mdadm /dev/md0 --add /dev/sdc1
    mdadm: added /dev/sdc1
    ~ # mdadm /dev/md1 --add /dev/sdc2
    mdadm: added /dev/sdc2
    ~ # mdadm /dev/md0 --add missing
    mdadm: 'missing' only meaningful with --re-add
    ~ # mdadm /dev/md0 --re-add missing
    ~ # mdadm /dev/md1 --re-add missing
    And here's the output from mdstat and partitions again:
    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md4 : active raid1 sdc3[0]
          240067392 blocks super 1.2 [1/1] [U]

    md3 : active raid1 sdb3[0]
          3902886720 blocks super 1.2 [1/1] [U]

    md2 : active raid1 sda3[0]
          3902886720 blocks super 1.2 [1/1] [U]

    md1 : active raid1 sdc2[4] sda2[0] sdb2[1]
          1998784 blocks super 1.2 [4/3] [UU_U]

    md0 : active raid1 sdc1[4] sda1[0] sdb1[1]
          1997760 blocks super 1.2 [4/3] [UU_U]

    unused devices: <none>
    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     147456 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 3907018584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 3903017984 sda3
       8       16 3907018584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 3903017984 sdb3
       8       32  244198584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35  240198656 sdc3
       8       48  244198584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
       9        2 3902886720 md2
       9        3 3902886720 md3
       9        4  240067392 md4
    ~ #
    which looks like it has the same output as in my previous post a couple of days ago, and I still get the volume down message on disk 3 when I log into the web UI.
    Not sure if it makes any sense, but again, thanks for your help
  • aranel42
    aranel42 Posts: 10  Freshman Member
    I did some more testing after a prompt from Zyxel support. It seems like I get the volume down error on a new volume whenever I already have a volume created.
    1 disk in any slot with a basic volume -> No error
    2 disks in any slots with a raid1 volume -> No error
    2 or more disks in any slots with a basic volume each -> Error on the volume that was created last
    I also tried to do another factory reset, as well as reinstalling the firmware, and I still get errors following the above pattern.
  • hola
    hola Posts: 5  Freshman Member
    edited December 2019
    Dear @Mijzelf,

    I have encountered the same problem here: volume down (RAID 5), no option to repair, and no access to my data. Are there any SOP commands that would help repair my volume?  :'(
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    SOP?

    Anyway, this forum is falling apart. Some of the listings which were there are lost.

    Can you enable the ssh server, log in over ssh as root (admin password), and post the output of

    cat /proc/mdstat
    mdadm --examine /dev/sd[abcd]3
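
    Logging in looks something like this (placeholder address, use the NAS's actual IP; the password is the admin password):

    ssh root@192.168.1.100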

  • hola
    hola Posts: 5  Freshman Member
    # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
    md2 : inactive sda3[0](S) sdd3[4](S) sdc3[2](S)
          11708660736 blocks super 1.2
           
    md1 : active raid1 sdb2[5](F) sda2[0] sdd2[4] sdc2[2]
          1998784 blocks super 1.2 [4/3] [U_UU]
          
    md0 : active raid1 sdb1[5](F) sda1[0] sdd1[4] sdc1[2]
          1997760 blocks super 1.2 [4/3] [U_UU]
          
    unused devices: <none>

    # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 14e886d5:f3e1b36a:b3cd558d:574f21a2
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Fri Sep 20 13:17:43 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 3fe9b787:1a0fa640:04a63619:8a65fb5b

        Update Time : Thu Nov 21 22:12:54 2019
           Checksum : 1b9dbffb - correct
             Events : 100

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 0
       Array State : A.A. ('A' == active, '.' == missing)
    mdadm: No md superblock detected on /dev/sdb3.
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 14e886d5:f3e1b36a:b3cd558d:574f21a2
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Fri Sep 20 13:17:43 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 930a65f9:196ec7bc:dc154c61:76ab86ea

        Update Time : Thu Nov 21 22:12:54 2019
           Checksum : e00cf615 - correct
             Events : 100

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : A.A. ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 14e886d5:f3e1b36a:b3cd558d:574f21a2
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Fri Sep 20 13:17:43 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : ccf2036a:a7ac4e63:9aa38f9d:165784aa

        Update Time : Thu Nov 21 22:12:54 2019
           Checksum : f374563b - correct
             Events : 100

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : spare
       Array State : A.A. ('A' == active, '.' == missing)
     
