NAS320 unable to upgrade to larger hdd size, cannot init applets: [Errno 2] No such file or directory


Answers

  • Hi Mijzelf,

    I did what I had in mind last night and so far all looks promising, except the volume is now in resync and it takes ages to complete. It's been over 13h and it is only at 39%.

    Here's what I found about re-synchronising:

    Resynchronizing or recovering a RAID 1 volume that was down is done block-by-
    block, so the time it takes depends more on the size of your hard drive(s) than the
    amount of data you have on them.
    Note: Do not restart the NSA while the NSA is resynchronizing or recovering a volume
    as this will cause the synchronization to begin again after the NSA fully reboots.
    Note: You can access data on a RAID volume while it is resynchronizing or
    recovering, but it is not recommended.
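    Since a block-by-block resync proceeds at a roughly constant rate, a linear extrapolation from the figures in this post (13h for 39%) estimates the total time. A small Python sketch, using only the numbers mentioned above:

```python
def estimate_total_hours(elapsed_h, fraction_done):
    """Linearly extrapolate a block-by-block RAID resync."""
    return elapsed_h / fraction_done

# 13 hours in, 39% done (the figures from this post):
total = estimate_total_hours(13, 0.39)   # ~33.3 hours in total
remaining = total - 13                   # ~20.3 hours still to go
```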

    It's interesting that this is the first time I've seen it doing a re-sync; before it was only recovering. But bear in mind the size is now 16TB, vs the 2TB it was before, when I was repairing from the previous hdd.
    Now, I've created a new volume with both 16TB disks in it, with no partitions, only a GPT partition table created on each 16TB disk while attached to a Linux laptop via a USB 2.0 external enclosure.

    Do you know what the difference is between the recovering that runs when you trigger a repair of a degraded volume and the resync that runs immediately after you've created a new volume?

    What is best practice for the shares that used to point at the previous volume: should I remove them all and create new ones, or, if I create folders with the same names, will the old shares work?

    Thanks,

  • Mijzelf
    Mijzelf Posts: 1,785  Guru Member
    Do you know what the difference is between the recovering that runs when you trigger a repair of a degraded volume and the resync that runs immediately after you've created a new volume?

    What is best practice for the shares that used to point at the previous volume: should I remove them all and create new ones, or, if I create folders with the same names, will the old shares work?

    Thanks,

    My guess is that recovery is when one of the disks went out of sync, due to being unavailable for some time or due to a bad sector, while resync is creating a new raid member on an empty partition. Or maybe it's just the other way around; the words seem equivalent in this context.
    Timewise it doesn't matter, as the difference is only the writing of the raid header.

    The shares on your old volume won't work for your new volume. The volume has got a new internal name (some 8 digit hex value), and that is used for the share internally.
  • I need help: although the volume is healthy after finishing the resync and it shows 14.8TB in size, I still have the Expand option next to it. I've clicked that and I got this error:

    e2fsck 1.41.14 (22-Dec-2010)
    Pass 1: Checking inodes, blocks, and sizes
    Error allocating block bitmap (4): Memory allocation failed
    e2fsck: aborted
    e2fsck -f -y return value:8

    What is the reason for this?
    When would e2fsck -f -y return value 8?
  • Although both 16TB hdds have SMART and I used to see their SMART info, I now only see the Volume and not the SMART information underneath it. Any reason why that is no longer available?

  • Mijzelf
    Mijzelf Posts: 1,785  Guru Member
    edited September 13
    I need help: although the volume is healthy after finishing the resync and it shows 14.8TB in size, I still have the Expand option next to it. I've clicked that and I got this error:

    e2fsck 1.41.14 (22-Dec-2010)
    Pass 1: Checking inodes, blocks, and sizes
    Error allocating block bitmap (4): Memory allocation failed
    e2fsck: aborted
    e2fsck -f -y return value:8

    What is the reason for this?
    When would e2fsck -f -y return value 8?
    Return value 8 is Operational Error. Seeing the error 'Memory allocation failed', I think you can safely assume that a 320 doesn't have enough memory to fsck a 14.8TB filesystem.
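    For reference, e2fsck's exit status is a bit mask, documented in e2fsck(8). A small Python sketch decoding it:

```python
# e2fsck exit status bits, per the e2fsck(8) man page:
E2FSCK_BITS = {
    1: "file system errors corrected",
    2: "file system errors corrected, system should be rebooted",
    4: "file system errors left uncorrected",
    8: "operational error",
    16: "usage or syntax error",
    32: "checking canceled by user request",
    128: "shared-library error",
}

def decode_e2fsck(status):
    """Return the list of conditions encoded in an e2fsck exit status."""
    if status == 0:
        return ["no errors"]
    return [msg for bit, msg in E2FSCK_BITS.items() if status & bit]
```

    So the "return value:8" seen above decodes to exactly one condition, "operational error".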
    About the 14.8TB, I first thought those were actually TiBs, but a quick calculation shows that cannot be true: (16*10^12)/2^40 = 14.55TiB.
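    That quick calculation in code:

```python
TIB = 2**40                  # one tebibyte, in bytes
disk_bytes = 16 * 10**12     # a disk sold as "16TB" (decimal prefix)
tib = disk_bytes / TIB       # ~14.55 TiB, not 14.8
```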
    Looking at /proc/partitions you can see if and where the lacking space is. You have the disk sda, the partition sda2, and the raid array md0. md0 should be slightly smaller than sda2 (by the raid header size), and sda2 should be about 500MB smaller than sda (the size of sda1).
    But it is also possible that it's a firmware calculation error (just like the Expand option). The firmware isn't designed nor tested for this size of disks. Although that is a good explanation for the Expand option, for the actual size I think the difference from the expected value is too small to be a calculation error. But you never know.
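    The 'Memory allocation failed' is plausible from a back-of-the-envelope estimate: e2fsck holds, among other structures, a bitmap with one bit per filesystem block. Assuming a 4KiB ext block size (an assumption; the thread doesn't show the actual block size), that single bitmap for this volume is already around 466MiB, in the same ballpark as the NSA-320's total RAM (512MB, if memory serves), before counting inode and directory bitmaps:

```python
fs_bytes = 15625377656 * 1024    # md0 size from /proc/partitions (1KiB units)
block_size = 4096                # assumed ext filesystem block size
blocks = fs_bytes // block_size
bitmap_mib = blocks / 8 / 2**20  # one bit per block -> ~466 MiB
```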

    Although both 16TB hdd are smart and I used to see their SMART info, I only see Volume and not the SMART information underneath - any reasons why that is no longer available?
    Don't know. But you can run smartctl manually to see if it outputs something strange.
    smartctl -a /dev/sda

    Maybe a full path is required:
    /i-data/md0/.system/zy-pkgs/bin/smartctl -a /dev/sda
    or
    /i-data/md0/.system/zy-pkgs/sbin/smartctl -a /dev/sda

  • Thank you so much again, great info.
    When you hover the mouse over the disks it says: Capacity 14.55 TB. If that figure is actually TiB rather than TB, then it matches your calculation. Can you please expand on what (16*10^12)/2^40 represents? Alright, it looks better, but I still cannot work out why it is giving me the Expand option if I can already use the full capacity. Probably because SMART fails to read the full physical capacity of the drive?

    I've checked and there's no utility at all; I guess I will have to install all the utilities again.
    Weird: when I removed the volume, I lost all the utility packages. I will reconnect it to the Internet and re-install them. Hopefully that will fix the SMART readout as well?

    ~ # smartctl -a /dev/sda
    -sh: smartctl: not found
    ~ # /i-data/md0/.system/zy-pkgs/bin/smartctl -a /dev/sda
    -sh: /i-data/md0/.system/zy-pkgs/bin/smartctl: not found
    ~ # cd /i-data/md0/.system/zy-pkgs/bin/
    /i-data/fbf6da57/.system/zy-pkgs/bin # ls -lart
    drwxrwxrwx    9 root     root          4096 Sep 11 23:12 ..
    drwxrwxrwx    2 root     root          4096 Sep 11 23:12 .



  • I've installed the packages again and SMART is back and working as expected. However, I still have the Expand option next to the Volume. Other than that, all is working fine.
    Regarding memory: many OSes use swap, the hdd as an extension of memory. When you look at the partitions, a portion of each disk is used for swap while the rest is for the RAID. Why wouldn't the swap be used to execute the scan and the expanding? So even if the repair had worked, I couldn't have expanded. I've burned 3 days trying all possible combinations with no success... :(

  • pianist4Him
    pianist4Him Posts: 18
    edited September 13
    Do you know what protocol and service the myZyxel cloud agent uses?
  • Mijzelf
    Mijzelf Posts: 1,785  Guru Member
    I still have the Expand option next to the Volume. Other than that, all is working as expected.
    Did you already check in /proc/partitions if there is actually room for expanding?
    Regarding memory: many OSes use swap, the hdd as an extension of memory. When you look at the partitions, a portion of each disk is used for swap while the rest is for the RAID. Why wouldn't the swap be used to execute the scan and the expanding?
    The NAS does use swap; you can see that with 'cat /proc/swaps'. But not all types of memory usage can be swapped away. Swap is incredibly slow for random access: the access time for a byte in memory is measured in nanoseconds, while the access time for a byte in swap is something like 10 milliseconds. That is a million times slower. Swap is fine for memory that you won't need for a long time, but if you need to access it all the time it's not usable. Would you be happy if e2fsck allowed swap to be used for its memory needs, resulting in a scan time of a month?
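    Those rough access times, in numbers (both figures are the order-of-magnitude estimates from the paragraph above, not measurements):

```python
ram_access = 10e-9    # ~10 ns per random access in RAM
swap_access = 10e-3   # ~10 ms per random access in swap on a spinning disk
slowdown = swap_access / ram_access   # about a million times slower
```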
    Can you please expand on what (16*10^12)/2^40 represents?
    That's about the difference between TB and TiB. People are used to metric prefixes (kilo, mega, giga, milli, micro, ...) which are powers of 10. For computers that is less obvious as they use the binary numeral system, which makes powers of 2 more usable.
    A long time ago, when computers were new and mysterious some smart guy said: '2^10 bytes is 1024 bytes. That is almost equal to 1kB, so let's actually use that prefix for 1024, as long as computers are involved. It's only 2.4% difference after all.' After some time that idea appeared to be not so smart, when the number got bigger. The difference between 2^10 and 10^3 (1024 and 1000), is only 2.4%, but the difference between 2^20 and 10^6 (1048576 and 1000000) is 4.9%. For a disk of 1TB (10^12 bytes, or 1000000000000) the difference is 10%. There has been trials against harddisk manufacturers because they were selling a 100MB harddisk, while it was 'only' 100*1000*1000 bytes in size, instead of 100*1024*1024. The manufacturers won, of course. Meanwhile the binary prefixes have been invented, just to catch that difference. A disk of 1 TB (Terabyte, 10^12 bytes) is not 1 TiB (Tebibyte, 2^40 bytes). Unfortunately the use of the right prefix is not widespread, so if some webinterface mentions 14.55TB, it's not obvious it actually means TB's. It could also be TiB. The constant (10^12)/(2^40) is the conversion ratio.
    (Fun fact: the need to disconnect binary prefixes from decimal prefixes became apparent quite early on. Maybe you are old enough to remember the 1.44MB diskette? Well, that one was neither 1.44MB nor 1.44MiB. It is a 720kB diskette at doubled density, so exactly twice the size. The 720kB diskette was actually 720KiB, and doubled that should have been 1440KiB, which is 1.41MiB or 1.47MB. But marketing said: 2 times 720K is 1.44M. And that is how the (1.44*1000*1024)-byte diskette was born.)
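    The widening gap between the binary and decimal prefixes, and the diskette arithmetic, can be checked numerically:

```python
# Gap between 2^n and the nearest decimal prefix grows with n:
for power, decimal, prefix in [(10, 10**3, "kilo"), (20, 10**6, "mega"),
                               (30, 10**9, "giga"), (40, 10**12, "tera")]:
    gap = (2**power / decimal - 1) * 100
    print(f"{prefix}: {gap:.1f}% difference")

# The "1.44MB" diskette: twice a 720KiB diskette.
size = 2 * 720 * 1024                      # 1474560 bytes
assert size == 1440 * 1024                 # i.e. 1440 KiB
assert size == round(1.44 * 1000 * 1024)   # marketing's mixed-prefix "1.44M"
```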
    Do you know if not installing and using the myZyxel cloud agent protects against remote access? Or do you know what protocol the engineers/support at Zyxel would use to connect to the NAS using the built-in usernames? I want to block that from the Internet. Does myZyxel cloud use only https, while the NAS OS communicates with the backend over a different protocol when checking firmware availability, etc.?
    myZyxel cloud uses https. The firmware and package updates use FTP. I don't know if myZyxel cloud actually gives support access to the NAS internals.
    BTW, I hope you are aware that this NAS is EOS for a long time. There will be no firmware updates, nor ZyXEL support.

  • This is a cat of /proc/partitions:

    major minor  #blocks  name

       7        0     143360 loop0
       8        0 15625879552 sda
       8        1     498688 sda1
       8        2 15625378816 sda2
      31        0       1024 mtdblock0
      31        1        512 mtdblock1
      31        2        512 mtdblock2
      31        3        512 mtdblock3
      31        4      10240 mtdblock4
      31        5      10240 mtdblock5
      31        6      48896 mtdblock6
      31        7      10240 mtdblock7
      31        8      48896 mtdblock8
       8       16 15625879552 sdb
       8       17     498688 sdb1
       8       18 15625378816 sdb2
       9        0 15625377656 md0
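    Using the figures above (the #blocks column is in 1KiB units), the layout matches what Mijzelf described earlier: md0 is only a raid header smaller than sda2, so there is essentially nothing left to expand into. A small sketch of the arithmetic:

```python
sda  = 15625879552   # whole disk, 1KiB blocks
sda1 = 498688        # swap partition, ~487 MiB
sda2 = 15625378816   # raid member partition
md0  = 15625377656   # the raid array

gap    = sda - sda1 - sda2        # 2048 KiB: partition table / alignment
header = sda2 - md0               # 1160 KiB: md raid header
usable_tib = md0 * 1024 / 2**40   # ~14.55 TiB actually usable
```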