NAS542 - HDD standby when LAN is disconnected

Xplorer Posts: 6  Freshman Member
edited January 2019 in Personal Cloud Storage
Hi there,

I have a brand-new NAS542 with two 2 TB WD Red drives, configured as RAID 1.
The firmware is updated to V5.21(ABAG.2).
Zyxel cloud and the media server are switched off.
There is still no data on the hard disk volume (it is empty).

<Control Panel->Power->Power Management->Turn off hard disks> is set to 10 minutes.

With these settings I see strange standby behaviour:
LAN1 is connected and up
  • If no device is connected to the NAS, the hard disks turn off after 10 minutes and stay off (good)
LAN1 is down (cable not connected or switch is off)
  • The hard disks stay on and never turn off (no standby, not good!!!)  :s

I think there is a bug in the firmware.

Can anyone here reproduce this behaviour with this system?
The simplest way to do so is to let the disks go into standby while LAN1 is connected, then disconnect LAN1. After less than a minute the disks will turn on again.

Is there a way to debug why the disks turn on when LAN1 is disconnected?
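One way to check the spin state directly over SSH is `hdparm -C` (a hedged sketch: this assumes hdparm is available on the box, e.g. via Entware, which I have not verified for the NAS542). The command itself needs root and a real drive, so the snippet below only shows how its output can be read:

```shell
# On the NAS (as root) you would run something like:
#   hdparm -C /dev/sda        # reports "active/idle" or "standby"
# Sample output, and how to pull out just the state word:
sample='/dev/sda:
 drive state is:  standby'
state=$(printf '%s\n' "$sample" | sed -n 's/.*drive state is: *//p')
echo "$state"
```

Running this right after the disks spun down, and again a minute after pulling the LAN cable, would confirm the wake-up without relying on drive noise.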

Thanks for help

#NAS_Jan_2019

Comments

  • Wiasouda Posts: 156  Ally Member
    I tested this on my NAS540, and there it is normal.
    I checked dmesg and saw the disks spin down.
    Maybe there is some program running, such as the thumbnail generator?
  • Mijzelf Posts: 1,601  Guru Member
    Is there a way to debug why the disks turn on when LAN1 is disconnected?

    My Tweaks package has a Disk Monitor, which in some cases can tell which process keeps the disk awake.
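    Monitors like this generally build on the kernel's `block_dump` switch (a sketch of the generic mechanism on older kernels, not necessarily exactly what Disk Monitor does), which logs every block I/O and dirtied inode, with the responsible process name, to the kernel log:

```shell
# On the NAS (as root), something along these lines enables the logging:
#   echo 1 > /proc/sys/vm/block_dump   # log block I/O + dirtied inodes to dmesg
#   ... wait for the disks to wake up ...
#   echo 0 > /proc/sys/vm/block_dump
# The resulting dmesg lines name the responsible process; for example,
# extracting the unique process names from two sample lines:
sample='[ 5070.06] echo(16435): dirtied inode 53215418 (cloudagent.log) on dm-1
[ 5076.01] jbd2/dm-1-8(1624): WRITE block 1703181088 on dm-1 (8 sectors)'
printf '%s\n' "$sample" | sed -n 's/^\[[^]]*\] \([^(]*\)(.*/\1/p' | sort -u
```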

  • Xplorer Posts: 6  Freshman Member
    Thanks, I will try to get more information from the station and see what keeps the disks running.
  • Xplorer Posts: 6  Freshman Member
    edited January 2019
    OK, I installed the Tweaks package and recorded a log with the Disk Monitor.

    Start of the log (disks go into standby after 10 minutes):
    ==== Sun Jan 27 23:41:25 CET 2019 ====
    [ 2651.740674]
    [ 2651.740676] ****** disk(0:0:0:0) spin down at 235174 ******
    [ 2652.308883]
    [ 2652.308886] ****** disk(1:0:0:0) spin down at 235231 ******

    Then, 30 minutes later, I disconnected the LAN cable from the station and the following processes started writing to the disks repeatedly:
    ==== Mon Jan 28 00:19:19 CET 2019 ====
    [ 4925.196646] PHY: comcerto-0:04 - Link is Down
    [ 4925.201034] pfe_eth_adjust_link: PHY: comcerto-0:04, phy->link: 0
    ==== Mon Jan 28 00:21:44 CET 2019 ====
    [ 5070.069600] echo(16435): dirtied inode 53215418 (cloudagent.log) on dm-1
    [ 5070.069633] echo(16435): dirtied inode 53215418 (cloudagent.log) on dm-1
    [ 5070.069646] echo(16435): dirtied inode 53215418 (cloudagent.log) on dm-1
    [ 5076.016546] jbd2/dm-1-8(1624): WRITE block 1703181088 on dm-1 (8 sectors)
    [ 5076.016614] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5076.016661]
    [ 5076.016663] ****** disk(1:0:0:0 0)(HD2) awaked by md2_raid1 (cmd: 35) ******
    [ 5076.025235] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5079.016456]
    [ 5079.016458] ****** disk(0:0:0:0 0)(HD1) awaked by kworker/1:0 (cmd: 35) ******
    [ 5079.381220] sh(16550): READ block 1703149632 on dm-1 (8 sectors)
    [ 5083.687982] jbd2/dm-1-8(1624): WRITE block 1048844224 on dm-1 (8 sectors)
    [ 5083.688033] jbd2/dm-1-8(1624): WRITE block 1048844232 on dm-1 (8 sectors)
    [ 5083.688339] jbd2/dm-1-8(1624): WRITE block 1048844240 on dm-1 (8 sectors)
    [ 5083.946489] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5083.946543] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    ==== Mon Jan 28 00:22:03 CET 2019 ====
    [ 5089.976486] jbd2/dm-1-8(1624): WRITE block 1703181088 on dm-1 (8 sectors)
    [ 5089.976555] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5089.976604] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5089.999054] jbd2/dm-1-8(1624): WRITE block 1048844248 on dm-1 (8 sectors)
    [ 5089.999099] jbd2/dm-1-8(1624): WRITE block 1048844256 on dm-1 (8 sectors)
    [ 5089.999506] jbd2/dm-1-8(1624): WRITE block 1048844264 on dm-1 (8 sectors)
    [ 5090.256478] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5090.256530] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5100.076650] flush-253:1(16501): WRITE block 1703181688 on dm-1 (8 sectors)
    [ 5100.076723] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5100.076772] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5100.306473] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5100.306519] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5103.320335] echo(16940): dirtied inode 53215418 (cloudagent.log) on dm-1
    [ 5103.320366] echo(16940): dirtied inode 53215418 (cloudagent.log) on dm-1
    [ 5103.320381] echo(16940): dirtied inode 53215418 (cloudagent.log) on dm-1
    [ 5106.016507] jbd2/dm-1-8(1624): WRITE block 1703181688 on dm-1 (8 sectors)
    [ 5106.016565] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5106.016615] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5106.042812] jbd2/dm-1-8(1624): WRITE block 1048844272 on dm-1 (8 sectors)
    [ 5106.042844] jbd2/dm-1-8(1624): WRITE block 1048844280 on dm-1 (8 sectors)
    [ 5106.042863] jbd2/dm-1-8(1624): WRITE block 1048844288 on dm-1 (8 sectors)
    [ 5106.042879] jbd2/dm-1-8(1624): WRITE block 1048844296 on dm-1 (8 sectors)
    [ 5106.043397] jbd2/dm-1-8(1624): WRITE block 1048844304 on dm-1 (8 sectors)
    [ 5106.296477] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5106.296523] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5115.096495] flush-253:1(16501): WRITE block 1702887432 on dm-1 (8 sectors)
    [ 5115.096563] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5115.096619] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5115.120176] flush-253:1(16501): WRITE block 1702887768 on dm-1 (8 sectors)
    [ 5115.120231] flush-253:1(16501): WRITE block 408 on dm-1 (8 sectors)
    [ 5115.326493] md2_raid1(1551): WRITE block 8 on sdb3 (1 sectors)
    [ 5115.326549] md2_raid1(1551): WRITE block 8 on sda3 (1 sectors)
    [ 5140.973262] echo(17394): dirtied inode 53215418 (cloudagent.log) on dm-1
    [ 5140.973296] echo(17394): dirtied inode 53215418 (cloudagent.log) on dm-1
    [ 5140.973310] echo(17394): dirtied inode 53215418 (cloudagent.log) on dm-1
    ......

    So, two processes are writing to disk while the LAN is disconnected:
    md2_raid1
    jbd2/dm-1-8

    Any ideas what they do, and why they write to the disk while the station is offline?
  • Mijzelf Posts: 1,601  Guru Member
    Actually, it's echo. md2_raid1 is responsible for reads and writes to the RAID array members, and jbd2 writes the filesystem journal; both were initiated by echo.
    Somewhere on your data partition (md2) there is a logfile cloudagent.log, which gets a line added by echo. Maybe that line gives more info. Maybe that file is in /i-data/sysvol/.system/
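    Side note: the inode number from the `dirtied inode 53215418 (cloudagent.log)` line can be mapped back to a path with `find -inum`; on the NAS you would point it at the mounted data volume (hypothetically `find /i-data -xdev -inum 53215418`). A self-contained demonstration on a temporary directory:

```shell
# Create a stand-in file and look it up by inode number, the same way you
# would locate the real cloudagent.log from the block_dump output:
tmp=$(mktemp -d)
touch "$tmp/cloudagent.log"
ino=$(stat -c %i "$tmp/cloudagent.log")   # inode number of the file
found=$(find "$tmp" -xdev -inum "$ino")   # walk the tree for that inode
echo "$found"                             # path that owns the inode
rm -rf "$tmp"
```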

  • Xplorer Posts: 6  Freshman Member
    edited January 2019
    Thanks, I found the cloudagent.log right away in /i-data/xyz/.PKG/myZyXELcloud-Agent/log/...

    The part of the log from around the same time as the disk log looks like this:
    [2019/01/27-22:58:37] plugin [zyxel-share-1.0] loaded!
    [2019/01/27-22:58:37] plugin [zyxel-led_indicator-1.0] loaded!
    [2019/01/27-22:58:37] plugin [zyxel-package_manager-1.0] loaded!
    [2019/01/27-22:58:37] plugin [zyxel-p2p-0.1] loaded!
    [2019/01/27-22:58:37] UPnP plugin loaded.
    [2019/01/27-22:58:37] plugin [zyxel-upnp-1.0] loaded!
    [2019/01/27-22:58:37] plugin [zyxel-device_information-1.0] loaded!
    [2019/01/27-22:58:37] NAT type detect:5, Hairping:0 [52.5.137.202]
    [2019/01/27-22:58:38] pair status checking: "NON-PAIRED" and no cloud user exists, do nothing
    [2019/01/27-22:58:40] connect to XMPP server successfully!
    [2019/01/28-00:21:43] connection disconnected: Error: 0  Reason: Ping timed out
    [2019/01/28-00:21:43] re-connect delay time: 9254 ms
    [2019/01/28-00:21:57] agent reconnect: restful failed
    [2019/01/28-00:21:57] get xmpp login data failed. wait for retrying...18508 ms
    [2019/01/28-00:22:16] agent reconnect: restful failed
    [2019/01/28-00:22:16] get xmpp login data failed. wait for retrying...37016 ms
    [2019/01/28-00:22:54] agent reconnect: restful failed
    [2019/01/28-00:22:54] get xmpp login data failed. wait for retrying...74032 ms
    [2019/01/28-00:24:09] agent reconnect: restful failed
    [2019/01/28-00:24:09] get xmpp login data failed. wait for retrying...148064 ms
    [2019/01/28-00:26:37] agent reconnect: restful failed
    [2019/01/28-00:26:37] get xmpp login data failed. wait for retrying...296128 ms
    [2019/01/28-00:31:35] RESTFUL: device register success, and http code is  200 
    [2019/01/28-00:31:35] agent reconnect: restful success
    [2019/01/28-00:31:36] sync pair status "NON_PAIRED"
    [2019/01/28-00:31:37] NAT type detect:5, Hairping:0 [52.5.62.135]
    [2019/01/28-00:31:37] pair status checking: "NON-PAIRED" and no cloud user exists, do nothing
    [2019/01/28-00:31:37] re-connecting...
    [2019/01/28-00:31:40] connect to XMPP server successfully!
    [2019/01/28-16:53:48] start cloudagent: restful failed, retrying
    [2019/01/28-16:54:07] monitor: enet process is dead, start it now
    [2019/01/28-16:54:33] RESTFUL: device register success, and http code is  200 
    [2019/01/28-16:54:34] start cloudagent: restful success, start cloud agent
    [2019/01/28-16:54:34] sync pair status "NON_PAIRED"
    At 2019/01/28-00:21 it realizes that the LAN connection is down and tries to log in somewhere?

    How can I switch cloudagent off? I guess I don't need it anyway, because I won't use the Zyxel cloud.
  • Mijzelf Posts: 1,601  Guru Member
    At 2019/01/28-00:21 it realizes that the LAN connection is down and tries to log in somewhere?

    AFAIK the cloud agent always tries to have a connection to the cloud server. So yes, it tries to reconnect when the connection is gone.
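    Incidentally, the retry delays in your log (9254, 18508, 37016, 74032, 148064, 296128 ms) double after every failed attempt, i.e. the agent uses plain exponential backoff. A minimal sketch of that schedule:

```shell
# Reproduce the retry schedule seen in cloudagent.log: the delay doubles
# after every failed reconnect attempt.
delay=9254
for attempt in 1 2 3 4 5 6; do
  echo "retry ${attempt}: ${delay} ms"
  delay=$((delay * 2))
done
# prints 9254, 18508, 37016, 74032, 148064, 296128 ms
```

    Each of those retries appends to cloudagent.log, which is exactly what keeps waking the disks.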

    How can I switch cloudagent off?
    You can't. But you can uninstall it. The firmware will automatically reinstall it, but in the 'Disable unneeded daemons' section of Tweaks you can find a way to block that.

  • Xplorer Posts: 6  Freshman Member
    OK, I will try to uninstall the cloud agent. I already found the section in Tweaks where unneeded daemons can be disabled.

    I just wonder why this cloud agent behaviour only shows on my station when there is no LAN. I know it's unusual to run the station without a LAN connection (when it's not needed), but it is still a bug in my view.
  • Mijzelf Posts: 1,601  Guru Member
    A bug? A NAS without a LAN connection is useless, so you could just as well switch it off.

    Yet I think it's a mistake to ship firmware which by default keeps a connection to an Amazon server, without even an option to switch that off.
  • Xplorer Posts: 6  Freshman Member
    edited February 2019
    OK, I agree, a NAS without LAN is useless.

    I have tried to find a way to uninstall the cloudagent, but over SSH the usual Linux commands do not work on the stock OS.

    On the net, I found a tutorial on how to get OMV running on an external drive (USB stick or SSD). It worked, but then I ran into other problems, like fan control and so on. I am not really fit in Linux, so I gave up.

    Back on the stock Zyxel OS, I installed a small 32 GB SSD and put all packages on that volume, so the hard drives hold only data. Now all OS processes run from the SSD, and the hard disks spin down fine even when the LAN is off (because cloudagent writes to the SSD).

    Just out of curiosity, how could I get cloudagent uninstalled on this system?