Reflash NAS540 via SD card slot? (Recover from probable Hack)


All Replies

  • Clay_JF2019
    Clay_JF2019 Posts: 25  Freshman Member
    edited March 2019
    The beeper went off right away. And it got plenty hot -- not quite hot enough to take skin on contact, but hot enough to require instant finger removal, probably in excess of 70C. Not a good long-term operating temperature for most CPUs.

    It's possible the design changed a little from when your NAS540 was made. But there's apparently good reason for mine to use the whole inner metal case as a heatsink (tabs bent upward on it make spring-like contact with two chips and the battery).
  • MinoS
    MinoS Posts: 6  Freshman Member
    Hi @Clay_JF2019 ,

    Any news on your issue? :) Did you manage to watch the boot log over serial?
    I have a boot-loop issue with a NAS520, so maybe it is related. :)
  • Clay_JF2019
    Clay_JF2019 Posts: 25  Freshman Member
    edited March 2019 Answer ✓
    Finally got control of my NAS. 

    The answer? I removed my data drives and replaced them with an uninitialized drive. If a drive has ever been used before, it must be zeroed out -- for instance with "diskpart" and "clean all" at a Windows command shell. With an uninitialized drive present (and only uninitialized drives), I could finally re-flash the firmware. Most importantly, with only an UNinitialized disk onboard, flashing also wiped out the configuration stored in firmware, returning the NAS to factory-reset condition. Not a full and ideal solution, for reasons detailed below.
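
    For anyone repeating that zeroing step: a sketch of the diskpart sequence, assuming the target shows up as "Disk 1" (check the "list disk" output first -- selecting the wrong number destroys the wrong drive):

        C:\> diskpart
        DISKPART> list disk        (identify the target drive by size)
        DISKPART> select disk 1    (1 is a placeholder -- use YOUR disk number)
        DISKPART> clean all        (writes zeroes over every sector; slow and destructive)
        DISKPART> exit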

    It turns out that a big part of the problem was that the reset button had been disabled by firmware configuration or firmware changes. Additionally, new firmware would not flash either when no disk was detected or when my old data drives were inserted.

    While hooked to just the motherboard by serial cable, all I saw was an endless reboot cycle -- even with the SD card for flashing inserted. The boot process would proceed apparently normally for around a minute (I did not time it), but at a certain point it would hit failures on missing file-system mount points and then very quickly shut down and restart. There was no real chance to get control of the command line in a controlled manner.
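
    For anyone else wanting to watch that boot log: a sketch of the serial hookup from a Linux machine, assuming a 3.3V USB-to-TTL adapter that shows up as /dev/ttyUSB0, and the common 115200 8N1 console settings (verify both assumptions for your board):

        # open the console (exit screen with Ctrl-A then K)
        screen /dev/ttyUSB0 115200
        # or, with picocom (exit with Ctrl-A then Ctrl-X):
        picocom -b 115200 /dev/ttyUSB0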

    Apparently initialized drives are hooked into the boot process even during an SD-card firmware flash, either (1) allowing a hack to divert flash attempts, or (2) because the legitimate flash process makes heavy assumptions about initialized drives being available for backing up the configuration and old software. In fact, given that the configuration was wiped out after a flash, I bet the configuration is normally restored automatically from disk AFTER legitimate flashing.

    I also learned that most of the normal boot configuration data (which apps are to run, etc.) and other easily hackable aspects are stored on the first data volume in normally invisible directories starting with a dot (e.g. .system, .admin, etc.). Yes, as expected there is a CRONTAB file, and it's on the data volume. The system volume (of the three Linux RAID volumes: md0 is swap, md1 is system, md2 is your data volume) contains a loop-mounted root file system, but its content is apparently fixed for each firmware update. That root file system is apparently complete, with all the app packages in place. It's the system mount points actually stored on the data volume that make the NAS system "live" and configurable. At least as near as I can tell without learning to read the code in its entirety.
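
    As a sketch of how to poke at this from a rescue Linux box (the mount point /mnt/data is my own placeholder), with the data volume already mounted there:

        # the normally invisible system directories sit at the top of the data volume
        ls -a /mnt/data            # expect dot-directories like .system and .admin
        # locate the crontab file somewhere under them
        find /mnt/data -maxdepth 3 -iname '*crontab*' 2>/dev/null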

    However, in my case simply deleting the system volume contents and the system mount-point directories from the data volume did not solve the issue. The system was still far too slow to access, and I could not reflash or wipe the old basic network/password configuration. Occasionally the login did work and the web interface would start to appear before the browser timed out (many minutes later). I am fairly sure the compressed root file on the system volume and the mount points were not recreated during the failed reflash attempts.

    Obviously anyone else with a similar issue MIGHT want to skip messing with the drive set completely. Once you have control of your NAS, you could try reinstalling your drive set "as is", then select the last option in the "first volume creation" wizard instead of ever completing it with uninitialized drives. The last option seems to say it will mount existing drives. There is always a chance that all the problems were actually only in firmware. Also, some parts of the system software on disk might get "updated" (i.e. effectively repaired) by what the NAS might see as a normal system upgrade. At worst you end up reflashing with an uninitialized drive again... unless you see something that says it's going to wipe the existing drives (abort if that seems possible).

    CONCLUSION:
    I was, however, able to easily mount the old RAID5 data volumes on a Linux file server, and the data is fine. I am instead placing a cheap set of four uninitialized 2TB drives in the NAS540 to start over. I will periodically back up data to the older set of 4x3TB drives when that Linux server is online.
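
    For reference, getting the old volumes mounted on a generic Linux box took roughly the following sketch (device and md numbering are assumptions; mdadm reads the real layout from the RAID superblocks on the disks):

        # assemble every array mdadm can find on the attached NAS disks
        sudo mdadm --assemble --scan
        cat /proc/mdstat                     # identify the big data array (md2 on the NAS itself)
        # mount read-only first, until you trust what you see
        sudo mkdir -p /mnt/nasdata
        sudo mount -o ro /dev/md2 /mnt/nasdata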

    I have decided to leave the older RAID5 volume on a Linux server for a multitude of reasons, but basically because the only way I can be sure to get the drives online with the NAS540 operating correctly seems to be to wipe them first. That initialized RAID5 volume set probably still won't mount and let the NAS boot correctly even after the reflash; it looks like the stuff I deleted will not be recreated automatically (e.g. the system volume file with the full root file system, and the mount-point directories on the data volume). I am also not sure the worm does not still lurk in the boot cylinder. I see nothing to suggest the NAS540 does ANY self-healing/restoration of bad system info, file systems, or volumes between flashes -- other than the low-level RAID5 mechanism. Finally, I note that the increase in network access speed to my data was quite embarrassing to the NAS540 -- only ameliorated by the large increase in power usage and physical enclosure size. The NAS540 definitely still has its place as constantly online data storage with cloud services -- but apparently it's wise to have some backup.

    SUGGESTION: 
    Recovery could be improved if ZyXEL ensured the system could boot when only the user data was intact on the disks, and then provided a way to trigger a complete system restore that left user data alone but rewrote all basic system files and disk sectors on the disk volumes. This would allow system healing, regardless of cause, without data loss or needing a complete user-data restore from backup. A small change to the uninitialized-first volume creation wizard, I would think.

    If this is already possible... it's not explicitly clear that that is what would happen. There is a wizard entry that at least implies a perfectly intact disk set could be remounted.
  • Clay_JF2019
    Clay_JF2019 Posts: 25  Freshman Member
    Yeah, I am really chicken about trying to write zeroes to all areas of the disks except the data volume and then creating a partition table to define the data volume partition. That would ensure any hack or random error in the system areas could not cause problems again. It's also really easy to make a critical error in sector arithmetic. Plus, of course, I am beginning to be pretty sure that ANY partition table or other initialization stops the NAS540 from accepting drives.
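
    For the record, the sector math I am chicken about would look something like this sketch (/dev/sdX and START are placeholders; one wrong number destroys the data volume, which is exactly the risk above):

        # print the partition table and note the data partition's starting sector
        sudo sgdisk -p /dev/sdX
        # zero everything BEFORE that sector (partition table plus system/swap partitions)
        sudo dd if=/dev/zero of=/dev/sdX bs=512 count=START conv=fsync status=progress
        # the partition table then has to be recreated to point at the untouched data area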

    The reading-material survey I have done so far seems to say that "start from scratch and restore any data from backups" is a pretty common NAS philosophy. And at some point it certainly is the philosophy that must rule: there are simply too many possible failure scenarios.

    Still, it would be nice if NAS vendors provided some data-recovery scenarios that allowed mounting intact and properly identified data partitions, then restoring the corrupted system areas around them where possible. Then I would be more committed to using only a NAS for network storage.

    Anyways, I will soon have that backup storage. It just will not be another NAS, but instead a Linux SAMBA server using my old RAID5 data drives and ignoring the tiny NAS system and swap partitions.
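
    A minimal smb.conf share stanza for that kind of server might look like this sketch (share name, path, and user are placeholders):

        [nasdata]
            path = /mnt/nasdata
            read only = no
            valid users = myuser      ; placeholder account
        # reload with: sudo systemctl restart smbd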
  • Clay_JF2019
    Clay_JF2019 Posts: 25  Freshman Member
    Obviously there is some secret "code" stamped in firmware and/or on the drives that makes the NAS540 accept a set of drives as its own, on top of the necessity of intact system partitions and mount points on the data volume.

    If any ZyXEL NAS expert knows a simple way to trigger these processes to remount my drives and restore the system stuff onto them without wiping the entire drives... it would be great to know. But my impression is that doing so is probably a lot more than 20 lines of terminal entry, and probably requires a lot of reading of system scripts to steal system code and modify it as necessary. Tell me it's 2-4 commands that are hard to enter disastrously, and I would be very happy.
  • zelgit
    zelgit Posts: 9  Freshman Member

    Hi, yes, I know I'm very late to this thread, but I had problems with my NAS326 that I couldn't solve (I broke it after some hacking), and I have successfully restored the firmware on my NAS thanks to Clay_JF2019's comment that one needs an uninitialized drive for this to work, and thanks to Mijzelf's rescue sticks and their Readme (even though they could be a bit more detailed :)).
    - I used this link for the rescue stick: https://zyxel.diskstation.eu/Users/Mijzelf/RescueSticks/

    However, the zip for the NAS326 does not contain an actual firmware file, as the zips for the other NASes do (not sure why), but I did find it in one of Mijzelf's other directories: https://zyxel.diskstation.eu/Users/Mijzelf/Firmware/NAS326/

    Again, a big, big thank you to Mijzelf for creating and/or hosting these rescue sticks, and also to Clay_JF2019 for the crucial finding that the HDD needs to be uninitialized (unallocated in Windows), because I did try the same steps without it and it didn't work.
