NAS540: First one degraded disk, now the volume is gone.

Kuno Posts: 25  Freshman Member
Hi

I have a bit of a problem.
I have had a degraded disk for some time, but now the volume has crashed.
I pushed the "Reinitialise" button in disk management, and now I have two hot spare disks, and no volume.
I have read this post: https://community.zyxel.com/en/discussion/6325/nas540-raid5-crash-gt-volume-lost and was hoping I could get one more disk online, so I could put in a new disk and rebuild.
I have run these commands:
cat /proc/mdstat
cat /proc/partitions
cat /proc/mounts
su 
mdadm --examine /dev/sd[abcdef]3

And here is the result:

~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : inactive sdb3[0](S) sdf3[3](S)
      7805773824 blocks super 1.2

md1 : active raid1 sdb2[0] sdf2[3] sdc2[1]
      1998784 blocks super 1.2 [4/3] [UU_U]

md0 : active raid1 sdb1[0] sdf1[3] sdc1[1]
      1997760 blocks super 1.2 [4/3] [UU_U]

unused devices: <none>

~ $ cat /proc/partitions
major minor  #blocks  name

   7        0     147456 loop0
  31        0        256 mtdblock0
  31        1        512 mtdblock1
  31        2        256 mtdblock2
  31        3      10240 mtdblock3
  31        4      10240 mtdblock4
  31        5     112640 mtdblock5
  31        6      10240 mtdblock6
  31        7     112640 mtdblock7
  31        8       6144 mtdblock8
   8        0 1465138584 sda
   8        1 1465138550 sda1
   8       16 3907018584 sdb
   8       17    1998848 sdb1
   8       18    1999872 sdb2
   8       19 3903017984 sdb3
   8       32 3907018584 sdc
   8       33    1998848 sdc1
   8       34    1999872 sdc2
   8       35 3903017984 sdc3
   8       48 2930264064 sdd
   8       49 2930264030 sdd1
   8       64    4044975 sde
   8       80 3907018584 sdf
   8       81    1998848 sdf1
   8       82    1999872 sdf2
   8       83 3903017984 sdf3
  31        9     102424 mtdblock9
   9        0    1997760 md0
   9        1    1998784 md1
  31       10       4464 mtdblock10


~ $ cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
/dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
/dev/sda1 /e-data/74039e2c2d0d922fad13bf0c6334d20d tntfs rw,relatime,uid=99,gid=0,umask=00,nls=utf8,min_prealloc_size=64k,max_prealloc_size=1465138548,readahead=4M,user_xattr,case_sensitive,fail_safe,hidden=show,dotfile=show,errors=continue,mft_zone_multiplier=1 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/sdd1 /e-data/99984a75a6099187884410a8c3dc23cc tntfs rw,relatime,uid=99,gid=0,umask=00,nls=utf8,min_prealloc_size=64k,max_prealloc_size=2930264028,readahead=4M,user_xattr,case_sensitive,reset_journal,fail_safe,hidden=show,dotfile=show,errors=continue,mft_zone_multiplier=1 0 0


~ # mdadm --examine /dev/sd[abcdef]3
mdadm: cannot open /dev/sda3: No such device or address
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 47bf8f23:7fad834c:1e963868:2f92844e
           Name : NAS540:2  (local to host NAS540)
  Creation Time : Fri Nov 20 23:22:39 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 7e94d1ab:bd4bcda4:a3b3f2d3:33b4f07c

    Update Time : Wed Feb  3 17:06:03 2021
       Checksum : 31ca9026 - correct
         Events : 473740

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : A..A ('A' == active, '.' == missing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 47bf8f23:7fad834c:1e963868:2f92844e
           Name : NAS540:2  (local to host NAS540)
  Creation Time : Fri Nov 20 23:22:39 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660160 (11166.25 GiB 11989.67 GB)
  Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 487860848 sectors
          State : active
    Device UUID : c8bfbe19:b1e8071a:6847d69e:ac94f267

    Update Time : Fri Jun 19 13:29:30 2020
       Checksum : e6b533e4 - correct
         Events : 37652

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing)
mdadm: cannot open /dev/sdd3: No such device or address
mdadm: cannot open /dev/sde3: No such device or address
/dev/sdf3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 47bf8f23:7fad834c:1e963868:2f92844e
           Name : NAS540:2  (local to host NAS540)
  Creation Time : Fri Nov 20 23:22:39 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ad37fb23:abd08ef6:c18140b4:92cbcbe9

    Update Time : Wed Feb  3 17:06:03 2021
       Checksum : 48dea373 - correct
         Events : 473740

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : A..A ('A' == active, '.' == missing)

Sorry for the long post. Any help will be very much appreciated.

Yours sincerely,
Lars

All Replies

  • Mijzelf
    Mijzelf Posts: 2,605  Guru Member
    Do you use that NAS a lot? The problem is, sdc3 was dropped from the array on 19 June 2020, while the array went down on 3 February 2021. So the pieces won't 'fit' together anymore. If the NAS was not in use during that time, it might be possible to rebuild the array, but if you used it normally, you'll have corrupted files and a corrupted filesystem.
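
    For reference, the drift is visible in the superblocks themselves: the Update Time and Events lines in the --examine output above can be compared directly, e.g.

    mdadm --examine /dev/sd[bcf]3 | grep -E 'Update Time|Events'

    A member whose Events counter is far behind the others (here 37652 vs. 473740) has been out of the array for a long time.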


  • Kuno
    Kuno Posts: 25  Freshman Member
    I mostly read from it, not a lot of writing. 
  • Mijzelf
    Mijzelf Posts: 2,605  Guru Member
    Well, you can try. Everything written after 19 June 2020 can be considered lost. Even if a file seems to be there, if it's bigger than 128kB it is practically certain to be corrupted: with a 64K chunk size a file that big is spread over several disks, and the stale disk's data (or parity) will be among them.
    It is possible that the array will be read-only, as the default action on detection of a corrupt filesystem is to remount it read-only.

    sdb3 is active device 0, sdc3 is active device 1, and sdf3 is active device 3. So device 2 is missing. The commands to rebuild the array are

    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdb3 /dev/sdc3 missing /dev/sdf3

    (those are two lines, both starting with mdadm)
    It is possible that the first command will error out. /proc/mdstat shows /dev/md2, and it's a bit unclear whether you have to stop it first in that case.
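
    After the create succeeds, it is worth confirming that the array is running (degraded) before mounting anything:

    cat /proc/mdstat
    mdadm --detail /dev/md2

    mdstat should then show three of the four members, i.e. something like [4/3] [UU_U].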

  • Kuno
    Kuno Posts: 25  Freshman Member
    I now have md2 as an active raid5 array. 
    md2 : active raid5 sdf3[3] sdc3[1] sdb3[0]
          11708660160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UU_U]
    Can I replace the missing disk with a new disk? And is there a way to see which physical disk it is that is missing?
    And thanks a lot for your help.

  • Kuno
    Kuno Posts: 25  Freshman Member
    How do I mount the array again? When I go into the Zyxel control panel, it says "please insert a disk before making a volume".
  • Mijzelf
    Mijzelf Posts: 2,605  Guru Member
    You can mount the array with:

    mkdir /mnt/mountpoint
    mount /dev/md2 /mnt/mountpoint

    But the box should automount on a reboot. Of course it's possible that severe filesystem errors will prevent an automount (or a mount).
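
    A more cautious variant is to mount read-only and skip the journal replay, so the mount itself cannot write to a possibly damaged filesystem (a sketch; noload is an ext4-specific mount option, and the volume is assumed to be ext4):

    mount -o ro,noload /dev/md2 /mnt/mountpoint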
    Can I replace the missing disk with a new disk?
    Yes, but I don't recommend it. Your array has a corrupted filesystem, and I don't know if filesystem repair tools will be able to repair that. If you add redundancy (a 4th disk) you will simply make your corrupt filesystem redundant. You'd better back up all files (you need a backup anyway), delete the volume and create a new one. Then put the files back. I think that's the only reliable way to get a consistent filesystem again.
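
    A minimal sketch of that copy-off step, assuming the array is mounted on /mnt/mountpoint and a sufficiently large backup disk is mounted under /e-data (the target path here is illustrative):

    mkdir -p /e-data/backup-disk/nas-backup
    cp -a /mnt/mountpoint/. /e-data/backup-disk/nas-backup/

    cp -a preserves permissions, timestamps and symlinks while copying.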
    When you do want to add redundancy, the web interface will offer to rebuild the array as soon as the volume is recognized and you have inserted a new disk.
    And is there a way to see which physical disk it is that is missing?
    Don't the LEDs show it? Or the web interface? My guess is that it's the 3rd disk, as the third device is missing from the array. The array is originally built in sequence, but after that you can shuffle the disks without problems, so that is no guarantee.
    The disk is no longer part of any array, nor does it show up (recognizably) in /proc/partitions, so it's dead.
    After some time of disk access I'd expect it to be the coldest one, or the hottest one, depending on how it died. In both cases it should clearly deviate from the others.
    If needed you can test all disks outside the box. I expect one of the disks to be not recognized, or wrongly recognized. (Your disk sde is not mounted, and 4GB in size. So that is an SD card or USB stick without a (supported) filesystem, or the remains of your dead disk.)
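
    One way to match device names to physical drives is by serial number, assuming smartctl is available on the firmware (hdparm -i is a possible alternative):

    for d in /dev/sd[abcdef]; do
        [ -b "$d" ] && echo "$d: $(smartctl -i "$d" | grep -i serial)"
    done

    Each reported serial can then be matched against the label printed on the drive; the labelled drive whose serial never shows up is the dead one.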
     
  • Kuno
    Kuno Posts: 25  Freshman Member
    I will do as you suggest, get everything off the NAS, and then make a new array with a new disk.
    Thanks a lot for all your help.
  • Kuno
    Kuno Posts: 25  Freshman Member
    Hi again.

    When I try to mount I get this message:
    mount: wrong fs type, bad option, bad superblock on /dev/md2,
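
    A read-only filesystem check can show whether the superblock itself is intact; a sketch, assuming the volume is ext4 and e2fsck is present on the firmware:

    e2fsck -n /dev/md2

    The -n flag opens the filesystem read-only and answers 'no' to every repair prompt, so the check cannot make things worse.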
  • Mijzelf
    Mijzelf Posts: 2,605  Guru Member
    Can you post the output of

    mdadm --examine /dev/sd[abcdef]3
    cat /proc/partitions

  • Kuno
    Kuno Posts: 25  Freshman Member
    Here is the output:

    mdadm --examine /dev/sd[abcdef]3:

    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 3f54cc40:f80a2eac:852f5bc9:9218f078
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Sat Feb 13 20:26:51 2021
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660160 (11166.25 GiB 11989.67 GB)
      Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 099f93b9:acf2b940:6a8b7a58:8d5024db

        Update Time : Sat Feb 27 22:23:47 2021
           Checksum : d09e053f - correct
             Events : 12

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 0
       Array State : AA.A ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 3f54cc40:f80a2eac:852f5bc9:9218f078
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Sat Feb 13 20:26:51 2021
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660160 (11166.25 GiB 11989.67 GB)
      Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 8946215f:abe55fac:5da49137:a5bf5ebf

        Update Time : Sat Feb 27 22:23:47 2021
           Checksum : a52327ca - correct
             Events : 12

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : AA.A ('A' == active, '.' == missing)
    mdadm: cannot open /dev/sdc3: No such device or address
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 3f54cc40:f80a2eac:852f5bc9:9218f078
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Sat Feb 13 20:26:51 2021
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660160 (11166.25 GiB 11989.67 GB)
      Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 429dc1d1:89752749:17a9583f:eb883655

        Update Time : Sat Feb 27 22:23:47 2021
           Checksum : 5229dc63 - correct
             Events : 12

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : AA.A ('A' == active, '.' == missing)
    mdadm: cannot open /dev/sde3: No such device or address
    mdadm: cannot open /dev/sdf3: No such device or address

    cat /proc/partitions:

    major minor  #blocks  name

       7        0     147456 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 3907018584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 3903017984 sda3
       8       16 3907018584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 3903017984 sdb3
       8       32    4044975 sdc
       8       48 3907018584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51 3903017984 sdd3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
       9        2 11708660160 md2

    And again, thank you very much for all your help.
    Lars
