NAS320 unable to upgrade to larger hdd size, cannot init applets: [Errno 2] No such file or directory: '/i-data/md0/.system'

I'm in desperate need of someone with CLI experience.
I am the owner of a NAS320 (latest firmware version, 4.81) which has been working fine for a good few years (first version) with 2 x identical 2TB WD hdds configured in RAID1. I've now bought 2 x 16TB Seagate ST16000NM001G-2K drives (firmware SN03).
This is what I did:
I shut down the NAS, took the 2TB disk out of the left tray, inserted one of the 16TB disks into the left tray, and powered the NAS back on. The volume was degraded and auto repair kicked in; it took about 8 hours to complete. Once the volume was healthy again, I removed the second 2TB disk from the right tray and replaced it with the second 16TB Seagate hdd using the same process (shutting down the NAS first, then powering up).
Every time I booted, the internal volume was in the Down state, and the only error I see in the logs is: no such file or directory: '/i-data/md0/.system'

For the second 16TB hdd, I did something before inserting it: I connected it via a USB external case to a Windows 10 machine and created an NTFS partition after setting its partition table to GPT (not MBR), as I used it to create a backup of the 2TB volume. However, before plugging it into the NAS, I removed the NTFS partition completely. Obviously, the partition table was still GPT.

I'm in a very difficult situation. After various combinations of putting a 2TB disk in the left tray and triggering repairs, I've managed to get both 16TB hdds into a healthy state with all the information on them; however, there was no option to extend from 2TB to 16TB. I then triggered a controlled reboot to confirm the volume would still come back healthy before trying to expand again. After that first reboot, the volume was back to Down with the same error. I'm out of ideas.
Can anyone confirm the following? If I delete the volume completely while both 16TB disks are connected (I first want to check whether I can create a volume with the full 16TB disk capacity and format the new hdds from scratch, to see if everything works fine with them), and then plug my previous 2TB hdds back in, will I still be able to access all my data? I do not have a fifth, external 2TB disk to copy all my data to, and I cannot risk losing it.

I need help, please.

Thank you so much in advance.


All Replies

  • Update: I've connected both 16TB hdds to an external hdd enclosure and accessed their partitions from a Linux laptop. What I found: the one that had been formatted by the NAS OS had an MBR partition table, while the one I had previously used as a backup was GPT. I don't know which one was the disk I successfully repaired to and then booted with. I've now made both GPT, since that's the recommended partition table for disks with a capacity over 2TB, and I'm trying the whole process again after removing all partitions the NAS OS created during the previous 3 volume repair attempts. A minimal way to do that wipe-and-relabel on the Ubuntu laptop is sketched below.
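
    For reference, this is roughly what that relabel looks like on the laptop (a hedged sketch; /dev/sdX is a placeholder for whatever name the enclosure gets there, so double-check with lsblk first -- parted's mklabel destroys the old table and every partition entry with it):

    sudo parted /dev/sdX mklabel gpt    # write a fresh, empty GPT label
    sudo partprobe /dev/sdX             # make the kernel re-read the new table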
  • The wording in the logs is confusing. Once you've started the repair process, the system log states: 'Start repairing degrading raid by disk2'. I'm confident it means repairing to disk2 from disk1, but the phrasing does not help you understand which disk is actually being brought up to parity. I've formatted the 16TB disk so as to make sure I still keep one of the 2TB disks, where the data was good, intact. The log phrasing would have been clearer as 'Start repairing degraded raid to disk2'.
  • Mijzelf
    Mijzelf Posts: 2,786  Guru Member
    When you have shell access to that NAS, you can check if it is using a GPT. Simply execute

    fdisk -l /dev/sd*

    When fdisk says it's GPT, or when it contains a partition of type 'ee', it's GPT. Otherwise it's MBR, and in that case the volume can't be enlarged.
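
    If fdisk's output is hard to read, parted reports the table type directly; a minimal check, assuming the disk is sda (parted ships with the >2TB-capable firmware builds):

    parted /dev/sda print | grep 'Partition Table'    # prints 'msdos' (MBR) or 'gpt'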
  • Mijzelf said:
    When you have shell access to that NAS, you can check if it is using a GPT. Simply execute

    fdisk -l /dev/sd*

    When fdisk says it's GPT, or when it contains a partition of type 'ee', it's GPT. Otherwise it's MBR, and in that case the volume can't be enlarged.

    Thank you, Mijzelf. I was hoping you'd see my request for help. I've tried that and I get 'cannot open sda'. I then tried parted -l and the partition table is reported as unknown. Might it be locked because a repair is in progress?

    I have a feeling it will be MBR. I will try again once the repair completes. I'm afraid I will need to copy all the data to a third storage device, recreate the entire volume on the new 2 x 16TB disks, and copy it back.
    It's a shame I could not use the disk-by-disk method. The manual does not mention any of these limitations or things to be aware of. It's sad, because GPT was first released in the late 90s, and while the NAS320 was developed years after that, it seems the future was not something the R&D team took into account when they architected the product.
  • Mijzelf
    Mijzelf Posts: 2,786  Guru Member
    edited September 2021

    I've tried that and I get cannot open sda.

    You have to log in as root, using the admin password, not as admin.
    Might it be locked because a repair is in progress?
    No. The partition table is not part of the raid array.
    it seems the future was not something the R&D team took into account when they architected the product.
    It's hard to anticipate future developments. I think the 320 hit the market in 2009, or thereabouts. At the end of 2009 the first 2TB disks appeared, but these were far too expensive for 320 users. I don't know if at that time it was already clear that GPT would be the standard for mass storage. It existed, but there was also a way to partition bigger disks using MBR: MBR is not limited to 2TiB, it is limited to 2^32 sectors. The default sector size is 512 bytes, hence the 2TiB limit. Most filesystems use a 4KiB block size, which means you could build hard disks with 4KiB sectors with hardly any drawbacks. Using a 4KiB sector size, MBR supports up to 16TiB.
    At the end of 2010 the first 'Advanced Format' disks (4KiB physical sector size, with 512-byte logical sectors for backward compatibility) hit the market. It's not hard to imagine these disks could have had a jumper to expose the internal 4KiB sectors, which would have stretched the life of MBR.
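
    To make the arithmetic concrete (a bash one-liner; the numbers follow directly from the 2^32-sector limit above):

    echo $(( 2**32 * 512  / 1024**4 ))   # 512-byte sectors: 2 (TiB)
    echo $(( 2**32 * 4096 / 1024**4 ))   # 4KiB sectors:    16 (TiB)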
    Anyway, the Linux kernel on the 320 supported both ways for bigger disks, but the firmware didn't support disks >2TB. A later firmware update added the tools (parted+scripts) to create a GPT partition table. In a parallel universe that was not needed, because fdisk doesn't care about sector size.
    BTW, I *think* the firmware in the parallel universe also needed an update, as I don't think it's possible to mix different sector sizes in a raid array.

    /Edit: You don't need a third disk to copy the data. You can pull the 2TB disks and create an array on the 2 new disks. Then pull one of the new disks and insert an old one. I don't know if the firmware will bring up both volumes degraded, but at worst it can be done manually, and then you can copy the data over in a shell. Then pull the 2TB disk, reinsert the 2nd new disk, and resync the array.
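
    A minimal sketch of that manual route, assuming the old 2TB disk shows up as sda and the new 16TB disk as sdb, with the data on partition 2 of each and md1/md2 unused (none of these names are guaranteed; check /proc/partitions and /proc/mdstat first):

    mdadm --assemble /dev/md1 /dev/sda2 --run   # old array, degraded
    mdadm --assemble /dev/md2 /dev/sdb2 --run   # new array, degraded
    mkdir -p /mnt/old /mnt/new
    mount -o ro /dev/md1 /mnt/old               # read-only protects the source
    mount /dev/md2 /mnt/new
    cp -a /mnt/old/. /mnt/new/                  # preserves ownership and permissions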
  • pianist4Him
    pianist4Him Posts: 18
    edited September 2021
    Hi Mijzelf,
    Thank you for responding. I guess you nailed it: indeed, the partition table of the source hdd, the 2TB one (disk1), is MBR, while that of the destination, the disk I'm repairing the RAID1 to (disk2), is GPT.

    Model:  WD20EARX-00PASB0 (scsi)
    Disk /dev/sda: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type     File system  Flags
     1      32.3kB  526MB   526MB   primary  ext2
     2      526MB   2000GB  2000GB  primary  ext4


    Model: Seagate ST16000NM001G-2K (scsi)
    Disk /dev/sdb: 16.0TB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt

    Number  Start   End     Size    File system     Name       Flags
     1      1049kB  512MB   511MB   linux-swap(v1)  mitraswap
     2      512MB   16.0TB  16.0TB  ext4            eexxtt44

    Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x2b2649e9

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1               1          64      514048+   8  AIX
    /dev/sda2              65      243201  1952997952+  20  Unknown

    Disk /dev/sda1: 526 MB, 526385664 bytes
    255 heads, 63 sectors/track, 63 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000

    Disk /dev/sda1 doesn't contain a valid partition table

    Disk /dev/sda2: 1999.8 GB, 1999869903360 bytes
    255 heads, 63 sectors/track, 243137 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000

    Disk /dev/sda2 doesn't contain a valid partition table

    WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


    Disk /dev/sdb: 16000.9 GB, 16000900661248 bytes
    255 heads, 63 sectors/track, 1945332 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1      267350  2147483647+  ee  GPT

    Disk /dev/sdb1: 510 MB, 510656512 bytes
    255 heads, 63 sectors/track, 62 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000

    Disk /dev/sdb1 doesn't contain a valid partition table

    Disk /dev/sdb2: 16000.3 GB, 16000387907584 bytes
    255 heads, 63 sectors/track, 1945269 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000

    Disk /dev/sdb2 doesn't contain a valid partition table


    What is your recommendation? I could wait for this repair to complete, which takes another 7h, then insert the second 16TB Seagate drive and wait another 8h. But when the repair RAID option is used from a source with an MBR partition table to a destination with GPT, is that supported or expected to work? I know GPT reserves a region that emulates MBR, but personally, given the amount of time I've already spent (3 days), I believe it is best to copy the data manually to different hdds, then reconnect the 16TB disks to an external enclosure on a Linux machine and delete all the partitions the NAS OS created during the repair RAID attempts, leaving both disks with a GPT partition table but with absolutely no data partitions on either. Then I'd add the black trays to each hdd, connect them to the NAS, and recreate a new volume from scratch. Once both become healthy with 16TB of storage capacity, I'd copy the data back manually from all the other, smaller hdds.

    Personally, I thought the repair RAID (disk-by-disk) method recommended by Zyxel would be the best one, but it turns out it consumed 3 days and is not partition-table aware. Do you know what happens when you choose repair RAID: does it create an identical copy of the source disk, including the partition table, or does it only create the linux-swap and linux-raid partitions?




  • Can I please get confirmation of the last idea I have left? The plan: copy all the data from the 2TB hdds to various hdds in my home; then remove all partitions from the 16TB disks, one by one, in an external enclosure connected to a laptop running Linux, while still leaving both 16TB disks with a GPT partition table; then install them into the NAS320 and, using the GUI, delete and recreate a new volume. Would the Linux OS on the NAS320 automatically detect the partition table of the 2 x 16TB storage disks and apply the scripts for GPT partition types, creating a volume that works flawlessly in RAID1 with the new 2 x 16TB Seagate disks? I'd then copy back manually all the data spread across desktop/laptop hdds to the brand-new volume on the NAS320.
  • Mijzelf
    Mijzelf Posts: 2,786  Guru Member
    Do you know what happens when you choose repair RAID: does it create an identical copy of the source disk, including the partition table, or does it only create the linux-swap and linux-raid partitions?

    AFAIK the partition table is cloned, and then mdadm does the resyncing on the raw partition. When resyncing is done and there is unpartitioned space on both disks, you get a resize button in the web interface. I suppose the amount of free space has to exceed some threshold for that; in most cases there are some sectors left over.
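
    A rough sketch of what that cloning step might look like under the hood (this is an assumption about the firmware internals, not confirmed; sda is the source disk, sdb the new one, and sfdisk only handles MBR tables -- for GPT the firmware would have to go through parted or similar):

    sfdisk -d /dev/sda | sfdisk /dev/sdb   # dump and replay the partition table
    mdadm /dev/md0 --add /dev/sdb2         # mdadm then resyncs the raw data partition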

    Using an external enclosure can be risky, as not all enclosures handle >2TiB disks nicely. Some only expose a 32-bit address space; some expose 4KiB sectors, to be able to use MBR on 16TB hard disks. But as soon as you connect such a disk to a real SATA port, the partition table is incompatible, of course.

    Anyway, if you offer the NAS empty 16TB disks, it should use GPT. I think it also uses GPT when you create a new volume.

    BTW, if you have a Linux laptop, you don't need to copy the data to 'various hdds in my home', as you can simply assemble & mount one of the 2TB disks (degraded, of course, but who cares?) and copy the data over. A cabled network is recommended; I don't know if your laptop supports that.
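
    A minimal sketch of the laptop side, assuming the 2TB disk appears there as sdX, the array gets the free name md127, and the NAS share is reachable over CIFS (device name, share path and credentials are all placeholders; check dmesg and /proc/partitions after plugging the disk in):

    sudo mdadm --assemble --run /dev/md127 /dev/sdX2    # degraded RAID1 member
    sudo mount -o ro /dev/md127 /mnt                    # read-only protects the source
    sudo mkdir -p /media/nas
    sudo mount -t cifs //nas/share /media/nas -o user=admin
    rsync -a /mnt/ /media/nas/                          # copy over the cabled network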



  • Mijzelf said:
    Do you know what happens when you choose repair RAID: does it create an identical copy of the source disk, including the partition table, or does it only create the linux-swap and linux-raid partitions?

    AFAIK the partition table is cloned, and then mdadm does the resyncing on the raw partition. When resyncing is done and there is unpartitioned space on both disks, you get a resize button in the web interface. I suppose the amount of free space has to exceed some threshold for that; in most cases there are some sectors left over.

    Using an external enclosure can be risky, as not all enclosures handle >2TiB disks nicely. Some only expose a 32-bit address space; some expose 4KiB sectors, to be able to use MBR on 16TB hard disks. But as soon as you connect such a disk to a real SATA port, the partition table is incompatible, of course.

    Anyway, if you offer the NAS empty 16TB disks, it should use GPT. I think it also uses GPT when you create a new volume.

    BTW, if you have a Linux laptop, you don't need to copy the data to 'various hdds in my home', as you can simply assemble & mount one of the 2TB disks (degraded, of course, but who cares?) and copy the data over. A cabled network is recommended; I don't know if your laptop supports that.



    Thanks,
    hmm... 'if you offer the NAS empty 16TB disks, it should use GPT'
    I always love the 'should'. Do you think this was tested by anyone on the Zyxel dev team or by their QA?

    Can you share the details of how to mount one of the 2TB disks from the command line while it is plugged into the NAS, even if degraded? And would that work after I remove and delete the volume from the GUI?

    How could I remove the partition table from the hdds to bring them back to completely empty, and allow the Linux OS on the NAS to create the partition table as well? From what I shared, the partition table of the second 16TB disk was created while the disk was mounted in the external enclosure over USB 2.0. One of them was converted from MBR to GPT in Windows 10, and one got its GPT created while attached to the laptop running Ubuntu Linux.

  • Mijzelf
    Mijzelf Posts: 2,786  Guru Member
    I always love the 'should'. Do you think this was tested by anyone on the Zyxel dev team or by their QA?
    Yes and no. I remember that support for >2TB disks was added in a firmware update, and there were some of those disks on the 'known working' list, so they tested that. But of course they never tested 16TB disks, as they didn't exist yet when the 320 went EOS.
    Can you share the details of how to mount one of the 2TB disks from the command line while it is plugged into the NAS, even if degraded? And would that work after I remove and delete the volume from the GUI?
    If you deleted the volume, I don't think it is still mountable. Otherwise, you first have to find out the device name:
    cat /proc/partitions
    This lists all block devices (disks, partitions, raid arrays, flash), where a disk is sdx (x being a, b, c, ...) and a partition on that disk is sdxn (n being 1, 2, 3, ...). The size is given in blocks of 1KiB, so you are searching for a disk of about 2000000000 blocks.
    Assuming the disk is sda, then the data partition is sda2.
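    For reference, the relevant part of the output should look roughly like this (the block counts come from the fdisk listing earlier in this thread; the major/minor numbers are illustrative):

    major minor  #blocks  name
       8     0  1953514584 sda
       8     1      514048 sda1
       8     2  1952997952 sda2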
    Assemble the degraded array:
    mdadm --assemble /dev/md1 /dev/sda2 --run
    And mount it:
    mkdir -p /mnt/mountpoint
    mount /dev/md1 /mnt/mountpoint

    Maybe the assembling is not necessary: if /proc/partitions shows 2 md devices, one of which is around 2TB, it is already done.
    How could I remove the partition table from the hdds to bring them back to completely empty, and allow the Linux OS on the NAS to create the partition table as well?

    dd if=/dev/zero of=/dev/sda bs=1M count=16

    will overwrite the first 16MiB of the disk sda. This includes the partition table (MBR or GPT, doesn't matter; both start in the first 64KiB). After that, reboot the NAS, as the decoded partition table is still in memory. When there are disks in the system which should not be wiped, double-check in /proc/partitions that the disk specified in 'of=' is the right one: dd will overwrite anything without asking. If you want to wipe the whole disk, omit the 'count=', but that will take days (at ~75MB/sec write speed, 16TB takes roughly 2.5 days) and is unnecessary.
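
    One caveat: GPT keeps a backup header and table in the last 33 sectors of the disk, which the command above leaves in place, so some tools may still detect remnants of it. A minimal sketch to zero that region too (again, the device name is an assumption; verify it in /proc/partitions before running dd):

    SECTORS=$(cat /sys/block/sda/size)   # disk size in 512-byte sectors
    dd if=/dev/zero of=/dev/sda bs=512 seek=$(( SECTORS - 34 )) count=34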

