NAS540 First one degraded disk, now volume gone

Hi all, 

I have almost the same issue as in this post:
https://community.zyxel.com/en/discussion/9843/nas540-first-one-degraded-disk-now-volume-gone

Nevertheless, I didn't push anything.
My wife noticed the error after the NAS started beeping randomly 2 days ago in the morning.
She clicked on Repair in the GUI, but it just said that Disk1 needs to be replaced.

So far I have not replaced anything. I did power the device off and pulled the disks out, as they were running extremely hot.
I did put them back today and started the NAS again.
It doesn't show any Volumes anymore, my RAID5 seems to have completely disappeared.
I am not sure whether shutting it down was a good idea, but I no longer had access to the GUI, and SSH wasn't enabled until after the restart :(

I did the following on the command line:

 
$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sda2[4] sdb2[1] sdd2[3] sdc2[2]
      1998784 blocks super 1.2 [4/4] [UUUU]

md0 : active raid1 sdb1[1] sdd1[3] sdc1[2] sda1[0]
      1997760 blocks super 1.2 [4/4] [UUUU]

~ $ cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
/dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0

cat /proc/partitions
major minor  #blocks  name

   7        0     146432 loop0
  31        0        256 mtdblock0
  31        1        512 mtdblock1
  31        2        256 mtdblock2
  31        3      10240 mtdblock3
  31        4      10240 mtdblock4
  31        5     112640 mtdblock5
  31        6      10240 mtdblock6
  31        7     112640 mtdblock7
  31        8       6144 mtdblock8
   8        0 3907018584 sda
   8        1    1998848 sda1
   8        2    1999872 sda2
   8        3 3903017984 sda3
   8       16 3907018584 sdb
   8       17    1998848 sdb1
   8       18    1999872 sdb2
   8       19 3903017984 sdb3
   8       32 3907018584 sdc
   8       33    1998848 sdc1
   8       34    1999872 sdc2
   8       35 3903017984 sdc3
   8       48 3907018584 sdd
   8       49    1998848 sdd1
   8       50    1999872 sdd2
   8       51 3903017984 sdd3
  31        9     102424 mtdblock9
   9        0    1997760 md0
   9        1    1998784 md1
  31       10       4464 mtdblock10


I didn't do anything else, since I have over 10 TB of data on here and some of it is critical not to lose :(

So any help is extremely appreciated. 

All Replies

  • Mijzelf
    Can you post the output of

    su
    mdadm --examine /dev/sd[abcd]3


  • Siles
    So happy you replied! 

    Here is the output  
     
     
    mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x2
         Array UUID : bcfb4bc3:a162dce1:dd989a89:5ae1c65d
               Name : Gilgamesh:2  (local to host Gilgamesh)
      Creation Time : Mon Dec 21 21:00:15 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660160 (11166.25 GiB 11989.67 GB)
      Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
    Recovery Offset : 0 sectors
              State : active
        Device UUID : 69016177:03bd82e4:0ee1db0b:cc2b24b5

        Update Time : Thu Mar 10 21:36:42 2022
           Checksum : 888f143c - correct
             Events : 1349

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 0
       Array State : AAAA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : bcfb4bc3:a162dce1:dd989a89:5ae1c65d
               Name : Gilgamesh:2  (local to host Gilgamesh)
      Creation Time : Mon Dec 21 21:00:15 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 8a4a6e7d:9c5e56e0:383f9010:e7ad0cb3

        Update Time : Sat Mar 12 12:58:43 2022
           Checksum : 8d101b98 - correct
             Events : 5776

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : bcfb4bc3:a162dce1:dd989a89:5ae1c65d
               Name : Gilgamesh:2  (local to host Gilgamesh)
      Creation Time : Mon Dec 21 21:00:15 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 3233a5b5:94b10bab:0801bd38:af54f0a3

        Update Time : Sat Mar 12 12:58:43 2022
           Checksum : a90cbfd1 - correct
             Events : 5776

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : bcfb4bc3:a162dce1:dd989a89:5ae1c65d
               Name : Gilgamesh:2  (local to host Gilgamesh)
      Creation Time : Mon Dec 21 21:00:15 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 682ce647:30acf1ce:4c2f82bd:6276c229

        Update Time : Sat Mar 12 12:58:43 2022
           Checksum : 69cb039b - correct
             Events : 5776

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : .AAA ('A' == active, '.' == missing)

  • Mijzelf
    Something strange is going on. Disk sda was dropped from the array some time after Mar 10 21:36:42 2022, but the other 3 members look healthy, last updated at Mar 12 12:58:43 2022 (UTC, I suppose), and I don't know why the array was not assembled.
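    (Side note: if you want to compare the members yourself, filtering the examine output for the event counters and update times is enough. This is only a convenience view of the data you already posted, and it assumes the grep on the box supports -E:)

    su
    mdadm --examine /dev/sd[abcd]3 | grep -E 'Update Time|Events|Array State'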
    You can try to assemble it manually:

    su
    mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run

    After that, md2 should be available in /proc/partitions, and possibly your data is back. (The latter depends on whether the firmware detects the 'hotplugged' raid array.)
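    To verify the assembly before going further, these two read-only checks should show the array and its three active members (standard mdadm/proc interfaces; the exact output layout can differ between mdadm versions):

    cat /proc/mdstat
    mdadm --detail /dev/md2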
  • Siles
    Alright, I will give it a shot and see if that gets it to run.

    Disk 1 does have bad SMART values and was also the reason why the RAID died, but I don't understand why; the NAS should've just dropped it and asked me to replace it instead of killing the array :/

    I will get back to you after I run the command.

    Thanks again!!
  • Siles
    mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
    mdadm: /dev/md2 has been started with 3 drives (out of 4).


    Looks like it didn't take disk1 into account as it must be really totally dead.

    /proc # cat partitions
    major minor  #blocks  name

       7        0     146432 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 3907018584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 3903017984 sda3
       8       16 3907018584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 3903017984 sdb3
       8       32 3907018584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35 3903017984 sdc3
       8       48 3907018584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51 3903017984 sdd3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
       9        2 11708660736 md2

    Looks like md2 is there.

    So what do I do now?
    It says in the GUI that for some odd reason I can't add a hot spare: https://prnt.sc/keQGY6LM3M8q
    Should I turn it off and replace the faulty one, or should I leave it on and replace it?
    Also I don't seem to see any data anywhere :( 
  • Siles
    Do I need to create the volumes again to access the data, as the volumes are gone as well?
  • Mijzelf
    Looks like it didn't take disk1 into account as it must be really totally dead.
    No, that is because you told it to leave out disk 1, by only specifying disks 2..4: /dev/sd[bcd]3. The manager would refuse to assemble the array if disk 1 were also added, as it's not in sync with the others.
    BTW, disk 1 is still used in md0 and md1 (firmware and swap), so it's not dead.
    It says in the GUI
    Don't expect the GUI to handle actions performed on the commandline smoothly.
    Do I need to create the volumes again to access the data, as the volumes are gone as well?
    No! Basically, creating a volume means putting a new filesystem on a block device (a disk or a raid array). And a new filesystem is always empty.
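    If you want to reassure yourself that a filesystem is still sitting on md2 without writing anything to it, a read-only superblock dump should do, assuming dumpe2fs (from e2fsprogs) is present in this firmware:

    su
    dumpe2fs -h /dev/md2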

    The firmware didn't handle hotplugging the raid array. Maybe it will handle mounting the filesystem:

    su
    mkdir /mnt/mountpoint
    mount /dev/md2 /mnt/mountpoint

    Now you should be able to see your data in /mnt/mountpoint.

    ls /mnt/mountpoint

    But maybe the firmware will detect the mount and move the mountpoint. In that case your volume should be back.
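    To double-check that the mount actually took, and to see how full the volume is, these standard read-only commands should be enough (the busybox versions may format the output slightly differently):

    grep md2 /proc/mounts
    df -h /mnt/mountpoint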
  • Siles
    Alright so

    Mijzelf said:
    Looks like it didn't take disk1 into account as it must be really totally dead.
    No, that is because you told it to leave out disk 1, by only specifying disks 2..4: /dev/sd[bcd]3. The manager would refuse to assemble the array if disk 1 were also added, as it's not in sync with the others.
    BTW, disk 1 is still used in md0 and md1 (firmware and swap), so it's not dead.

    I am blind, and not used to this type of work ^^"

    I did what you said

    ~ # mkdir /mnt/mountpoint
    ~ # mount /dev/md2 /mnt/mountpoint
    ~ # ls /mnt/mountpoint
    Software     admin        aquota.user  lost+found   music        photo        video
    ~ #

    So I see my folders are there.

    I do not yet have access to any of them via the network, though:
    https://prnt.sc/kuGNUbhAT6Sn

    But so far this looks much more promising than I thought!
    What next step would you recommend?


  • Siles
    Also, the GUI changed the status from Crashed to Degraded.
  • Mijzelf
    As everything seems OK, I'd simply try to let the firmware do its job. Shut down the box, remove disk 1, and power it on again.

    And if everything works, think about a backup strategy, as the box will fail again; you just don't know when. RAID is not a backup.
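    Once a replacement disk is in and the firmware starts rebuilding, you can follow the progress from the shell; while recovery is running, /proc/mdstat shows a progress indicator and percentage for md2:

    cat /proc/mdstat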
