RAID5 with 4x 4TB switched to 3x 12TB - NAS542

chris24xx Posts: 1
edited June 2022 in Personal Cloud Storage
Hello, I would like to replace the 4x 4TB disks in the existing RAID5 with 3x 12TB disks, due to lack of space.
In the meantime I have swapped the first two HDs for 12TB ones, which worked great. But now the third swap is coming up, and I have no idea how to remove the 4th 4TB disk from the RAID5.
Has anyone here already done that?

All Replies

  • Mijzelf
    Mijzelf Posts: 2,645  Guru Member
    edited June 2022
    I haven't done it myself (actually I avoid RAID when I can), but I can tell you it's not trivial.
    To convert a 4 disk raid5 into a 3 disk raid5 you have to
    1. shrink the filesystem
    2. shrink the raid array
    3. fail one disk
    4. reshuffle the degraded 4 disk array to a healthy 3 disk array.
    Step 2 is necessary because a 3 disk raid5 can contain less data than a 4 disk one, given the same disk size. So step 4 isn't possible if you haven't shrunk the array first.
    Step 1 is necessary because of step 2: you can't simply cut a part off the filesystem. To be able to shrink the filesystem it must have enough free space. (Which will be a challenge if you started this because you ran out of space.)
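    The capacity difference can be checked with some quick shell arithmetic. The member size below is a made-up example figure for a 4TB disk's data partition, just to illustrate the (n-1) rule:

    ```shell
    # RAID5 usable capacity is (n-1) * member size.
    # Hypothetical member size in KiB (an assumption for illustration):
    member_kib=3906887168
    four_disk_kib=$(( 3 * member_kib ))   # 4-disk RAID5
    three_disk_kib=$(( 2 * member_kib ))  # 3-disk RAID5
    echo "4-disk: ${four_disk_kib} KiB, 3-disk: ${three_disk_kib} KiB"
    # The filesystem must fit in the smaller figure before the reshape,
    # i.e. it must shrink to at most 2/3 of its current size.
    ```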

    When you have a healthy 3 disk raid5 on disks that have unused space, you can enlarge it:
    1. grow the partitions containing the array members
    2. grow the raid array
    3. grow the filesystem
    The firmware can do these 3 steps for you.
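    For reference, the manual equivalent of those 3 steps would look roughly like this (a sketch only, assuming /dev/sda and partition 3 as in the commands further down; repeat the partition step for each member disk):

    ```
    # 1: enlarge partition 3 on each member disk
    parted /dev/sda resizepart 3 100%
    # 2: grow the array to use the enlarged partitions
    mdadm --grow /dev/md2 --size=max
    # 3: grow the filesystem to fill the array
    resize2fs /dev/md2
    ```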

    The first 4 steps should be something like:
    # 1
    umount /dev/md2  # I don't think you can shrink a mounted filesystem.
    tune2fs -l /dev/md2 | grep -i block # get the filesystem size in blocks (df won't work on an unmounted device; check 'Block size' too, it is normally 4 KiB)
    e2fsck -f /dev/md2 # resize2fs refuses to touch a filesystem that wasn't just checked, and you can't resize one with errors
    resize2fs /dev/md2 <size> # where size <= 2/3 of the current size.
    e2fsck -f /dev/md2 # not strictly necessary, but if there is a filesystem error you want to find it now.
    # 2
    mdadm --grow /dev/md2 --array-size=<size*4> # --array-size is in KiB, while the filesystem size was in 4 KiB blocks, hence the *4. (--size would set the per-member size instead, which is not what we want here.)
    # 3
    mdadm /dev/md2 --fail /dev/sdd3 --remove /dev/sdd3 # assuming /dev/sdd3 is the remaining 4TB disk
    # 4
    mdadm --grow /dev/md2 --raid-devices=3 --backup-file=<file outside the array> # This will take a lot of time, as almost every chunk will have to be moved, and parity calculated. Reducing the number of devices needs a backup file, stored somewhere outside the array itself.
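    The unit mismatch in step 2 is easy to trip over, so here is the conversion spelled out with a hypothetical block count (the numbers are made up):

    ```shell
    # Suppose resize2fs shrank the filesystem to this many 4 KiB blocks
    # (an assumed figure for illustration):
    fs_blocks_4k=1900000000
    md_size_kib=$(( fs_blocks_4k * 4 ))  # the mdadm array-size argument is in KiB
    echo "mdadm argument: ${md_size_kib}"
    ```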

    Some reading about managing raid arrays: A guide to mdadm.
    Some other info: resize2fs, e2fsck.

    By the way, the firmware doesn't support volumes >16TiB (which is about 17.6TB), so you won't be able to let the firmware use the full 24TB in one volume. Instead it is supposed to offer you an interface to create an extra ~6TB volume in the remaining space.
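    The 16TiB figure converts to decimal TB like this (pure arithmetic, nothing NAS-specific):

    ```shell
    # 1 TiB = 1024^4 bytes, 1 TB = 1000^4 bytes
    bytes=$(( 16 * 1024 ** 4 ))
    tb=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / 1000^4 }')
    echo "16 TiB = ${bytes} bytes = ${tb} TB"
    ```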

    /Edit: after rethinking this, step 4 doesn't seem right. How can I tell mdadm to convert a degraded 4 disk raid5 into a healthy 3 disk raid5 without telling it it can use more space per member? So either that is implicit, or you have to tell it:
    mdadm --grow /dev/md2 --raid-devices=3 --size=<size*3/2>
    But if you can specify a new size here, you could simply skip the whole resizing in steps 1-3, as the disks have enough space. Instead you could enlarge the partitions (which are now clones of the 4TB disks' partitions), and let it resize on the fly:
    fdisk /dev/sda # use fdisk to enlarge the 3rd partition to ~7TB; this way there is
    fdisk /dev/sdb # enough space for the conversion, while there is still space left,
    fdisk /dev/sdc # so the firmware will offer to enlarge the partitions even more,
                   # taking care of growing the filesystem.
    mdadm /dev/md2 --fail /dev/sdd3 --remove /dev/sdd3 # assuming sdd is the 4TB disk
    reboot # in case the new partition size isn't propagated everywhere. You can also remove the 4TB disk now
    mdadm --grow /dev/md2 --raid-devices=3 --size=max # size=max will tell mdadm to use the partition sizes.
