NAS326 CPU fully loaded with io

Hi all,

once again I have this annoying problem that the CPU of my NAS326 is at 100% load. At the moment I can't even access the web interface, and I certainly can't access my data via SMB.

When I connect with PuTTY (SSH) and run top, I can see it's not the old problem with Python; the CPU is busy waiting on I/O.

Mem: 498072K used, 12656K free, 0K shrd, 47832K buff, 282380K cached
CPU: 0.1% usr 1.7% sys 0.0% nic 0.0% idle 98.0% io 0.0% irq 0.0% sirq
Load average: 16.44 15.69 12.50 1/134 6581
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
6423 2 root SW 0 0.0 0 1.5 [kworker/u2:0]
3703 1 root D N 25000 4.8 0 0.0 python /usr/local/fileye/fileye.pyc
3408 1 root S 17656 3.4 0 0.0 python /usr/local/apache/web_framework/job_queue_daemon.pyc
2985 1 root S N 17236 3.3 0 0.0 /usr/sbin/nmbd -D
3827 3822 nobody D N 11544 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
3828 3822 nobody D N 11544 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
3838 3822 nobody D N 11544 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
3863 3822 nobody D N 11544 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
4056 3822 nobody D N 11544 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
4084 3822 nobody D N 11544 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
2623 2588 nobody S 11456 2.2 0 0.0 /i-data/.system/zy-pkgs/pkg_httpd -f /etc/pkg_service_conf/httpd2.conf
2624 2588 nobody S 11456 2.2 0 0.0 /i-data/.system/zy-pkgs/pkg_httpd -f /etc/pkg_service_conf/httpd2.conf
6333 3822 nobody S N 11412 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
6334 3822 nobody S N 11412 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
6335 3822 nobody S N 11412 2.2 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
3822 1 root S N 9892 1.9 0 0.0 /usr/sbin/httpd -f /etc/service_conf/httpd.conf
2588 1 root S 9888 1.9 0 0.0 /i-data/.system/zy-pkgs/pkg_httpd -f /etc/pkg_service_conf/httpd2.conf
4027 1 root D 9016 1.7 0 0.0 /i-data/affca256/.PKG/myZyXELcloud-Agent/bin/zyxel_xmpp_client
6170 6169 root D N 6424 1.2 0 0.0 python /usr/local/apache/web_framework/main_wsgi.pyc
2043 1 root D 6336 1.2 0 0.0 /usr/bin/python /usr/local/apache/web_framework/lib/zylist.pyc

I think this has been happening since the last firmware update.

Any hint what I can do to save my data? At the moment I'm transferring with mv from the internal drive to an external drive, but it's unbelievably slow! 🙈

Thanks and kind regards,
Sebastian

All Replies

  • Mijzelf
    Answer ✓

    I'm afraid your hard disk is dying. If you run 'dmesg', I expect you'll see lots of I/O errors.
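    A quick way to check (a sketch, assuming the NAS's BusyBox shell; the sample error line is only an illustration, real messages vary by kernel):

```shell
# On the NAS itself:
#   dmesg | grep -iE 'i/o error|medium error' | tail -n 20
# Sanity-check the filter against a typical kernel error line:
sample='end_request: I/O error, dev sda, sector 12345678'
printf '%s\n' "$sample" | grep -icE 'i/o error|medium error'   # prints 1 (one matching line)
```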

  • Schneefalke

    Thanks for your answer.

    Oh, you're right. That shows a lot of corrected read errors.

    So I wonder why the NAS told me that both HDDs were in good condition the last time I had access to the GUI, just a few days ago. 😝

    Maybe it would help to remove the bad disk? I'm running RAID1.

    But isn't it a bug that the whole system nearly goes down when one disk is dying?

  • Schneefalke

    Drive 1 tested with smartctl: PASSED 🤔

    Now I need to find the name used for the second disk, but all commands touching the drives take a loooong time to run! 😛
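    For anyone hitting the same thing: an overall PASSED verdict only reflects the drive's own health summary, so it's worth looking at the raw attributes too. A sketch (the sample attribute line is only an illustration):

```shell
# On the NAS:
#   smartctl -H /dev/sda      # overall verdict (can still say PASSED on a sick disk)
#   smartctl -A /dev/sda | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
# Pulling name and raw value out of a sample 'smartctl -A' line:
printf '%s\n' '  5 Reallocated_Sector_Ct   0x0033   092   092   036    Pre-fail  Always       -       352' \
  | awk '{print $2, $NF}'   # prints: Reallocated_Sector_Ct 352
```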

  • Schneefalke

    OK, HDD 2:

    But:

    So I guess I found the bad boy… 😜

    I think I'll try:

    1. Removing the bad disk and hoping the system will run smoothly again, so I can transfer all files to the new system.
    2. Booting up a Linux system and connecting the good drive directly to the PC to get the data.

    Any other better ways? 😋

  • Mijzelf

    Maybe it would help to remove the bad disk? I'm running RAID1.

    Yes.

    Now I need to find the name used for the second disk, but all commands touching the drives take a loooong time to run!

    Unless you have a USB disk inserted, the first disk is /dev/sda and the second /dev/sdb. If a USB disk was inserted at boot, that one could be /dev/sda, and the rest just shifts to /dev/sdb and /dev/sdc.
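    To double-check which name belongs to which physical disk (a sketch; the sample lines only illustrate the /proc/partitions format):

```shell
# On the NAS (BusyBox has no lsblk, but these always work):
#   cat /proc/partitions                      # all block devices the kernel sees
#   smartctl -i /dev/sdb | grep -i serial     # serial number identifies the physical disk
# Extracting just the whole-disk names from sample /proc/partitions lines:
printf '%s\n' '   8        0  976762584 sda' \
              '   8       16  976762584 sdb' \
              '   8       17  976762583 sdb1' \
  | awk '$4 ~ /^sd[a-z]+$/ {print $4}'   # prints: sda and sdb, one per line
```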

  • Mijzelf

    Any other better ways?

    The canonical way to handle bad disks in a redundant RAID array is to replace them and let the array manager restore the redundancy.

    Of course it's not a bad idea to have a backup. You should always have a backup. RAID is not a backup.
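    The replacement itself is done with mdadm (a sketch; /dev/md2 as the data array and /dev/sdb2 as the failing member are assumptions — check /proc/mdstat for the real names before running anything):

```shell
# On the NAS, as root:
#   mdadm /dev/md2 --fail /dev/sdb2      # mark the dying member as failed
#   mdadm /dev/md2 --remove /dev/sdb2    # take it out of the array
#   # ...power down, swap the physical disk, then:
#   mdadm /dev/md2 --add /dev/sdb2      # the rebuild starts automatically
# A degraded RAID1 shows up in /proc/mdstat with one missing member:
printf '%s\n' 'md2 : active raid1 sda2[0] 1948662784 blocks [2/1] [U_]' \
  | grep -oE '\[U_\]|\[_U\]'   # prints: [U_]
```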

  • Schneefalke

    OK, thank you for your very helpful thoughts and tips! 😊👍🏻

    Several years ago I already replaced a bad disk in this NAS326, which it reported correctly. But this time I'll transfer the data onto my new NAS, so I don't need to rebuild the RAID in this old one.

    But I'll put the remaining good disk into a USB case and use it as an additional backup drive. In fact, the NAS is partly a backup for data on my PC, but other (less important) files on it still need to be backed up. 😉

  • Schneefalke
    Answer ✓

    And finally it works. 😁

    With only the good disk remaining, the NAS booted up beeping like crazy, login was possible, and it told me the RAID is degraded, but I'm able to transfer the data at acceptable speed. 😉
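    In case it helps someone later: the degraded state can also be confirmed on the command line (a sketch; /dev/md2 is an assumption, the md number depends on the firmware's layout):

```shell
# On the NAS:
#   cat /proc/mdstat             # quick overview of all md arrays
#   mdadm --detail /dev/md2      # verbose state of the data array
# The [active/total] member count from a sample degraded mdstat line:
printf '%s\n' 'md2 : active raid1 sda2[0] 1948662784 blocks [2/1] [U_]' \
  | grep -oE '\[[0-9]+/[0-9]+\]'   # prints: [2/1]
```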

    Thanks again and have a good night! 👋🏻😊
