NAS540: Volume down, no option to repair

aranel42 Posts: 10  Freshman Member
edited March 2019 in Personal Cloud Storage
Hi all!
I have a NAS540 with 2x 250GB HDDs in slots 3 and 4 in RAID1, which I was going to replace with a single 2TB drive.
After backing up the data to an external drive, I stupidly pulled out the old drives, disregarding the warning tone that started to sound, and put the new disk in slot 3. Then I went to the web interface and set up a new volume on that disk.
After the NAS restarted the warning tone went away, but I get a message about "Volume down" on slot 3. This happens regardless of which disks I use: either or both of the old 250GB drives, or the new 2TB drive. I also get no option to repair it, as the "Manage" button is greyed out.
This happens with any drive put in slot 3 or 4, regardless of whether I create a basic or RAID volume.
I tried putting in both old drives and setting up a new RAID1 volume on them, but on the next startup I still get the error, without any option to rebuild.
Any ideas what to do? I'm stumped. I'm guessing the NAS still has an entry for the old RAID setup somewhere and I need to get rid of it, but I can't find any way to do that.

#NAS_Mar_2019

All Replies

  • Ijnrsi Posts: 254  Master Member
    If you can't create the new volume by deleting all content on your old/new disks, try putting the hard disks into a Windows/OS X machine and formatting them there to wipe the old file system information.
    Then put them back into the NAS and create the volume again.
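    If a format from Windows/OS X is not enough, you could also try clearing the leftover RAID metadata from a Linux live environment. Something like this is what I have in mind, assuming the old disk shows up there as /dev/sdX (a placeholder) and mdadm is available; check the device name with lsblk first, because these commands are destructive:

        # Clear any leftover md (RAID) superblock from the old data partition
        # (on these NAS disks the data partition is the third one)
        mdadm --zero-superblock /dev/sdX3
        # Wipe any remaining filesystem/RAID/partition-table signatures from the whole disk
        wipefs -a /dev/sdX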
  • aranel42 Posts: 10  Freshman Member
    That unfortunately did not work. I started up an instance of GParted to format all of the disks (the 2TB one and the two 250GB ones), put one of the 250GB ones back in slot 3 of the NAS, created a basic volume on it, and waited for the restart; it shows up as crashed as soon as I can access the web UI again.
  • Mijzelf Posts: 2,600  Guru Member
    Try a factory reset. That wipes the internal flash storage.
  • aranel42 Posts: 10  Freshman Member
    I currently also have two 4TB disks filled with data (set up as two basic volumes of 4TB each). Would I lose anything on them if I do a reset? And should I do the reset with or without any disks inside?
  • Ijnrsi Posts: 254  Master Member
    The process will reset the configuration only, so the data would not be affected.
    But remember that you will need to re-enable the shared folders you created, in share management in the web GUI.
  • aranel42 Posts: 10  Freshman Member
    edited March 2019
    Unfortunately factory resetting did not do anything either. I still get a "Volume down" error when the NAS has restarted after creating a volume in slot 3 or 4. This is using one of the drives that I cleaned completely via GParted.
    It might be worth noting that the volume marked as crashed shows no volume name on the Internal Storage -> Volumes page in the web UI.
    Is there any way of accessing the "deeper" OS of the NAS (other than the web UI)? Just throwing out suggestions as I'm not really sure what I could do here.
  • Space_Cake Posts: 4  Freshman Member
    Deep down I found some Python files which are called from the web UI.
    Unfortunately I can't see the correct parameters. On the other hand, for your/my problem, without some direct Zyxel support it is going to be hard to find a good soul.

    If you have access to the UI, maybe try re-flashing it with newer firmware?
  • aranel42 Posts: 10  Freshman Member
    I'm already on the latest firmware (V5.21(AATB.2)), unfortunately.
  • Mijzelf Posts: 2,600  Guru Member
    Can you post the output of
        cat /proc/mdstat
        cat /proc/partitions
    when the web UI shows a volume down?
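    (Running those needs a shell on the NAS rather than the web UI. A minimal sketch of how that might look, assuming SSH access can be enabled in the NAS's settings and the admin account is used for it - both assumptions:)

        ssh admin@<nas-ip>      # <nas-ip> is a placeholder for the NAS's LAN address
        cat /proc/mdstat
        cat /proc/partitions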
  • aranel42 Posts: 10  Freshman Member
    ~ $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md4 : active raid1 sdc3[0]
          240067392 blocks super 1.2 [1/1] [U]

    md3 : active raid1 sdb3[0]
          3902886720 blocks super 1.2 [1/1] [U]

    md2 : active raid1 sda3[0]
          3902886720 blocks super 1.2 [1/1] [U]

    md1 : active raid1 sda2[0] sdb2[1] sdc2[4]
          1998784 blocks super 1.2 [4/3] [UU_U]

    md0 : active raid1 sda1[0] sdb1[1] sdc1[4]
          1997760 blocks super 1.2 [4/3] [UU_U]

    unused devices: <none>
    ~ $ cat /proc/partitions
    major minor  #blocks  name

       7        0     147456 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 3907018584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 3903017984 sda3
       8       16 3907018584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 3903017984 sdb3
       8       32  244198584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35  240198656 sdc3
       8       48  244198584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
       9        2 3902886720 md2
       9        3 3902886720 md3
       9        4  240067392 md4
    Here you go. This is with 2x 4TB drives with a basic volume each in slots 1 and 2, 1x 250GB drive with a basic volume in slot 3 (the one that gives the error), and 1x 250GB drive in slot 4 with no volume.
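    In case it helps, I can also run more diagnostics from that shell. I'm thinking of something like this, assuming the crashed slot-3 volume is the md4/sdc3 pair shown above (an assumption on my part):

        # Kernel's view of the RAID1 array backing the slot-3 volume
        mdadm --detail /dev/md4
        # On-disk md superblock of its member partition
        mdadm --examine /dev/sdc3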
