NAS542

Hi, this is my first post and I'm not very familiar with posting or viewing forums.

I've recently tried to upgrade the firmware on my NAS542, which seems to have failed. I rebooted the NAS but can't get any connection to it, so I decided to factory reset it by holding a pin in the reset hole at the back for a couple of seconds: 1 beep, 2 beeps, 3 beeps, then no more beeps came and I released it. Still no go several minutes after this. I've read somewhere that you can create a flash stick of some sort, and now I'm turning to you for help. What can I try to solve this issue? Thankful for all answers.


All Replies

  • flinkk
    flinkk Posts: 4

    I'd like to update on the matter. I've now noticed that my NAS542 starts up but is inaccessible with hard drives inserted. When I shut it down, remove all 4 HDDs and boot up, I can access it again and it responds to pings. But once I shut it down, put the 4 HDDs back in and boot up, it can't be pinged; I only get "destination host unreachable". I have now tried a factory reset and done test 1 again: remove all HDDs, boot up, works fine, I can access it and all. But test 2: shut down, insert the HDDs, boot up, not reachable again, no ping. Any ideas, someone?

  • Mijzelf
    Mijzelf Posts: 2,788  Guru Member

    That could be caused by a power supply which can no longer provide the juice for the disks. The major part of the peak current is needed to spin up the disks. If during spin-up the voltage drops below a certain level, the CPU stalls.

    And unfortunately, a power supply is subject to wear, mainly because of the capacitors.

    It is also possible that one of the disks died, and now draws more current than it should.

  • flinkk
    flinkk Posts: 4

    I've connected one of the drives to my other computer and ran a SMART test, which says it's healthy and fine. Connecting just one drive to the NAS gives the same result: it can't be reached either. I also connected a completely different HDD to the NAS on its own and it was recognized. Is it possible that the RAID has become corrupt or broken and is triggering this issue?
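
    For reference, this is roughly the kind of SMART check I ran on the other computer. Just a sketch, assuming smartmontools is installed; /dev/sdX is a placeholder for the drive's actual device name:

    smartctl -H /dev/sdX        # quick overall health verdict
    smartctl -a /dev/sdX        # full SMART attributes and error log
    smartctl -t short /dev/sdX  # optional short self-test; check the results again a few minutes later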

  • Mijzelf
    Mijzelf Posts: 2,788  Guru Member

    Not by design. If the box fails to assemble the raid arrays, or fails to mount the contained filesystems, it will boot as if there are no disks. But maybe some rare corruption can stall the assembling or mounting process.
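
    If you can get a shell on the box (the ssh server can be enabled while the disks are out, then hotplug a disk), something like this shows whether the raid superblocks still look sane. Just a sketch; sd[abcd]3 matches the data partitions used further down in this thread, adjust to what your box actually shows:

    mdadm --examine /dev/sd[abcd]3   # print the raid superblock of each member partition
    cat /proc/mdstat                 # what the kernel has (auto)assembled so far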

    Connecting just one drive to the NAS gives the same result: it can't be reached either.

    Is that the disk you tested in your computer?

  • flinkk
    flinkk Posts: 4

    Status update: I've solved this obnoxious issue by formatting the HDDs, and I've upgraded to another NAS brand. I tried to search for a RAID recovery tool, but alas, no success, as all the tools required a paid licence. @Mijzelf, I thank you for your response and the effort of posting here, but this is the end of the road for me and my Zyxel journey.

  • VL4DST3R
    VL4DST3R Posts: 9

    Hi, I have encountered the exact same issue after upgrading to the latest firmware V5.21(ABAG.13) on my NAS542. The device does not boot at all unless all drives are removed. All drives are otherwise working fine and did so up to the firmware update. If I connect them to the NAS after it has booted, they get recognized inside the web UI, but no volumes or disk groups are recognized, effectively rendering them useless. This is clearly a software issue from the last security update, and I need to restore functionality urgently. What can I do?

  • Mijzelf
    Mijzelf Posts: 2,788  Guru Member

    What do you mean by 'doesn't boot at all'? The LEDs don't go on, or only briefly, or…

    If it really doesn't boot at all, the firmware has nothing to do with it, as it is only accessed in the 3rd boot stage.

    In this thread someone had problems with the WebPublisher package, which crashed the web interface after a firmware update. As the package is installed on the disk, the web interface was reachable when the disks were out, and the package could then be disabled. But that box clearly booted.

  • VL4DST3R
    VL4DST3R Posts: 9

    Thanks for the reply. I don't even use WebPublisher, so unless I'm missing something, that shouldn't be interfering with anything.

    The LEDs do come on, indicating activity and however many drives are inserted into the bays, but with drives connected I never hear the beep that indicates a successful boot-up, the point at which the NAS would actually come online and become accessible via the web interface or other means.

    Booting with no drives connected works just fine and I can access its interface.

  • VL4DST3R
    VL4DST3R Posts: 9

    To provide an update:

    I've been talking with and debugging this with someone from support on the side, and their theory so far is that one of the drives somehow kicked the bucket and is causing this (which yes, apparently can happen, more about this below). However, I still find this very odd for a number of reasons, the main one being how closely my issue mirrors the original author's, right down to involving a firmware update.

    So, the NAS can die if a drive fails to initialize in some way? Apparently yes.

    From talking with support I've learned that apparently any kind of drive failure will(?)/can cause the boot sequence to fail entirely.

    Failure to boot due to a bad drive was news to me, given that the NAS claims it can even detect degraded volumes and offers warnings and options to rebuild the array where possible, but support said this is an apparently "known limitation" of this NAS series. To quote their answer:

    "This is a known limitation of the NAS series, unfortunately. While it could be implemented, current design does not account for this, once the drive ceases to respond, the boot-up process would hang. Unfortunately, the NAS series is currently EOL, and there are no plans to improve this behaviour."

    So I guess this covers the boot failure… kind of… but it raises another issue: even if we assume this is the case and one of my drives did indeed die in a way that only this reboot revealed, it doesn't explain why the issue is still present when I remove the affected drive and leave only the other drive in the NAS.

    Am I to believe all my drives died at once?

  • Mijzelf
    Mijzelf Posts: 2,788  Guru Member

    Am I to believe all my drives died at once?

    Could be (if the power supply or a DC/DC converter acts up), but unlikely. It can be tested, though. Remove the disks, boot up the NAS, enable the ssh server, and log in over ssh. Then plug the disks back in. (The hardware supports hotplugging, but I would not plug them in all at once; leave a few seconds in between.)
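
    After plugging them back, first check that the kernel actually sees the disks at all. A quick sketch (the device names are what the box usually assigns; they may differ):

    cat /proc/partitions   # block devices and partitions the kernel currently knows about
    cat /proc/mdstat       # raid arrays the kernel has (auto)assembled, if any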

    When all disks are plugged in, assemble & mount the array:

    su                                # become root
    mdadm -A /dev/md0 /dev/sd[abcd]3  # assemble the array from partition 3 of each disk
    mkdir -p /mnt/mountpoint          # create a mountpoint
    mount /dev/md0 /mnt/mountpoint    # mount the filesystem on the assembled array
    

    Now you should be able to see your shares in /mnt/mountpoint:

    ls /mnt/mountpoint/
    

    And have a look in the kernel log to see if any hardware errors occurred:

    dmesg
    
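    If the log is long, a rough filter like this narrows it down to the disk-related lines (the patterns are only a starting point, not exhaustive):

    dmesg | grep -i -E 'ata|sd[a-d]|error|fail'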
