NAS 326 change disks

gyulav
gyulav Posts: 6  Freshman Member

Hello!

I have the Zyxel NAS 326 with two 2TB disks installed in basic mode. The disks are full.

I would like to replace them with two 3TB disks.

How can I quickly copy/replace these two disks?

Should I copy each 2TB disk to a new 3TB disk with the Linux dd command? If so, how do I expand the disk afterwards?

Or should I install the two new disks into the NAS, format them, and copy the old disks to the new disks via USB?

All Replies

  • Mijzelf
    Mijzelf Posts: 2,904  Guru Member
    Answer ✓

    'Basic mode' is without raid1, I suppose? In that case you've got 2 volumes. Then 'dd' is not the way to go. I'd exchange one disk, and create a new volume on the new disk. Then use the filebrowser to copy the data over, or do it from an ssh shell (with cp -a). After you're done, put back the old disk and the empty new one, and repeat.

    Finally put both new disks in. If you had custom shares, you'll have to re-enable them, as they are on another volume now.
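
    For the ssh route, a minimal sketch of the copy, assuming the old volume is mounted under /i-data/<old hex code> and the new one under /i-data/<new hex code> (placeholder names; the hex codes are explained further down the thread):

    # copy everything, preserving ownership, permissions and timestamps
    cp -a /i-data/<old hex code>/. /i-data/<new hex code>/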

  • gyulav
    gyulav Posts: 6  Freshman Member

    Yes, 'Basic mode' is without raid1.

    I will try it.

    The config is on the HDD/disk, not in the NAS?

    Thanks.

  • Mijzelf
    Mijzelf Posts: 2,904  Guru Member
    Answer ✓

    The config is on the HDD/disk, not in the NAS?

    The firmware config is on an internal flashdisk. The configuration of the packages is (mainly?) on the disk, next to the packages.

    On second thought, you will lose your packages. And ZyXEL has shut down their package server. Yet it is possible to install them using a backup I made. If you use dd to copy the disk, I think the packages will remain. But it depends on the sequence.

    Background: The NAS has a 'system disk', on which the packages and caches are installed. The name of the system disk (actually the first 4 bytes of the raid GUID of the volume) is stored on the flash partition as a symlink (pointing to /i-data/<4 bytes hexcode>, the mountpoint of that volume). As soon as the system disk doesn't exist and another volume is available on boot, the symlink will be changed and the other volume will be promoted (and prepared; existing system volume stuff is reset). So when juggling disks you will almost certainly lose your system disk, and so your packages. If you have important stuff, for instance in MySQL, back it up before juggling. Getting that instance of MySQL started again could be non-trivial.
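
    If you want to see how those /i-data/<hexcode> names map to the raid arrays, a rough sketch from an ssh shell (using md2 purely as an example device):

    mdadm --detail /dev/md2 | grep -i uuid   # the first 4 bytes of this array UUID...
    ls /i-data/                              # ...show up as one of the mountpoint names here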

  • gyulav
    gyulav Posts: 6  Freshman Member

    I read a lot of posts/comments.

    I have a 2-bay docking station. This station can clone drives.

    I clone the old drive to the new, larger drive. Then I delete the sda3 partition and create a new sda3 partition with a new end (the start is the original one).

    Device Start End Sectors Size Type
    /dev/sda1 2048 3999743 3997696 1.9G Linux RAID
    /dev/sda2 3999744 7999487 3999744 1.9G Linux RAID
    /dev/sda3 7999488 3907028991 3899029504 1.8T Linux RAID

    major minor #blocks name


    8 0 1953514584 sda
    8 1 1998848 sda1
    8 2 1999872 sda2
    8 3 1949514752 sda3
    8 16 1953514584 sdb
    8 17 1998848 sdb1
    8 18 1999872 sdb2
    8 19 1949514752 sdb3
    9 0 1997760 md0
    9 1 1998784 md1
    9 2 1949383488 md2
    9 3 1949383488 md3

  • Mijzelf
    Mijzelf Posts: 2,904  Guru Member

    Is there a question in this post?

  • gyulav
    gyulav Posts: 6  Freshman Member
    edited April 17

    Possibly, yes.

    Do you think this would work?

    Do I have to run mdadm and resize2fs commands after fdisk?

    ('Basic mode' is without raid1)

  • Mijzelf
    Mijzelf Posts: 2,904  Guru Member

    Yes, that would work. Although I would use parted, and not fdisk. Parted has a resize command, 'resizepart'. But it can also be done with fdisk. Make sure the units are sectors, as in your dump.
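
    A rough sketch of the parted route, assuming the cloned disk shows up as /dev/sda on the machine you run this on and the data partition is number 3, as in your dump (double-check the device name first):

    parted /dev/sda unit s print        # verify partition 3 and note its start sector
    parted /dev/sda resizepart 3 100%   # move the end of partition 3 to the end of the disk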

    Then you have to grow the raid array:

    mdadm --grow --size=max /dev/sda3

    After that assemble the raid array:

    mdadm --assemble /dev/md2 /dev/sda3 --run

    And finally grow the filesystem:

    resize2fs /dev/md2

    If you are doing this on the NAS itself, you'll have to unmount the raid array and stop it before you can grow it. I think. Stopping is easy:

    mdadm --stop /dev/md2

    Unmounting is hard. You'd better inject a telnet daemon in /etc/init.d/rc.shutdown, as described here.

  • gyulav
    gyulav Posts: 6  Freshman Member

    Thanks Mijzelf.

    Is this the right order ('Basic mode', not RAID, not JBOD)?

    I take the old disks out of the NAS.
    I clone the old disks (sda and sdb) to the new disks.

    With fdisk I delete the sda3 partition and create the sda3 partition with a new end.
    I will do the same with sdb3.

    mdadm --grow --size=max /dev/sda3
    mdadm --assemble /dev/md2 /dev/sda3 --run
    resize2fs /dev/md3

    mdadm --grow --size=max /dev/sdb3
    mdadm --assemble /dev/md2 /dev/sdb3 --run
    resize2fs /dev/md2

    I put the new disks into the NAS.

    ***********

    If I am doing this on the NAS itself, when do I need to unmount and stop the device? Before mdadm --grow or before resize2fs?

    I think md3 is with sda3, and md2 is with sdb3:
    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
    md3 : active raid1 sda3[0]
    1949383488 blocks super 1.2 [1/1] [U]

    md2 : active raid1 sdb3[0]
    1949383488 blocks super 1.2 [1/1] [U]

    md1 : active raid1 sda2[0] sdb2[1]
    1998784 blocks super 1.2 [2/2] [UU]

    md0 : active raid1 sda1[0] sdb1[1]
    1997760 blocks super 1.2 [2/2] [UU]

    unused devices: <none>

  • Mijzelf
    Mijzelf Posts: 2,904  Guru Member

    I made a mistake. The grow command on the raid array is applied to the md device, not the disk(partition).

    So the command to grow it is

    mdadm --grow --size=max /dev/md3

    and then of course the device must not be stopped, as then the md device is gone. To resize the filesystem the volume does not have to be unmounted, but if there are filesystem errors resize2fs will refuse to work, and insist on running e2fsck first. To run that, the volume has to be unmounted.
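
    If it comes to that, a minimal sketch of the check-then-resize path (with the array unmounted, and /dev/md2 just as the example device):

    e2fsck -f /dev/md2    # check (and if needed repair) the filesystem
    resize2fs /dev/md2    # then grow it to fill the array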

    So the correct sequence is: clone the disk and put it in the NAS. (I think it's a good idea to never have a disk and its clone inserted while booting. The NAS reads the GUID of the raid array to decide what to do, and you cloned that GUID.)

    fdisk /dev/sda

    mdadm --grow --size=max /dev/md2

    resize2fs /dev/md2

    After running fdisk, check /proc/partitions to see if sda3 has its new size. If not, you have to reboot first. In that case the partition table on disk is adapted, but the table in kernel memory isn't. And yes, reading /proc/mdstat is the right way to see which partitions are in which raid array (md device).

    (BTW, as you can see in /proc/mdstat, a 'Basic volume' is actually a single disk raid1 array. I think this is done for two reasons: first, to keep the firmware simple, as it only has to deal with raid arrays; and second, to be able to add redundancy later. It's easy to convert a single disk raid1 array to a multidisk one.)
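
    Putting the corrected steps together, a rough per-disk sketch (assuming, as in your /proc/mdstat, that sda3 backs md3 and sdb3 backs md2, and that the filesystems are clean):

    fdisk /dev/sda                      # delete sda3, recreate it with the same start sector, a larger end and the Linux RAID type
    cat /proc/partitions                # confirm sda3 shows its new size; reboot first if it does not
    mdadm --grow --size=max /dev/md3    # grow the raid array backed by sda3
    resize2fs /dev/md3                  # grow the filesystem to fill the array
    # then repeat for /dev/sdb and /dev/md2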
