Remove disk from Volume1 (RAID 1)

Hi!
I have just bought a second drive for my NAS326 server. "Accidentally", I added it as the second disk of Volume 1 (RAID 1), so it didn't expand my capacity; it just added free space for backup (I think). The second disk is empty and I would like to remove it from the volume and add it as free volume space to use. Could you advise me how to do that? My "Manage Volume" option is grayed out, but since the second disk is empty, there shouldn't be any risk of losing data (from the first disk, I hope).
Is it possible? I have SSH access, so I can run any commands.
Please advise

Accepted Solution

  • Mijzelf  Posts: 2,600  Guru Member
    Answer ✓
    Ah right. So the md2 array still has 2 assigned members. Apparently --remove doesn't decrease the member count.
    After re-reading the man page of mdadm, I found that the command to set the number of members to 1 should be:
    su
    mdadm --grow /dev/md2 --raid-devices=1

All Replies

  • Mijzelf  Posts: 2,600  Guru Member
    When it's a raid1 volume, the new disk is not empty; it's a mirror of the other disk. In that case you can simply pull it, but the box will then warn that the array is degraded (which it is, of course).
    On the other hand, when it's a linear array, the filesystem is stretched to span both disks, but the part of the filesystem on the new disk will be mostly empty. If you pull the disk, the filesystem will be damaged (the stored size will no longer match the physical size). So in that case the filesystem first has to be shrunk, and then the disk removed from the array.
    To find out what kind of array it is, can you post the output of

    cat /proc/mdstat
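
    For reference, the two cases look different in /proc/mdstat. Illustrative output only (device names and block counts will differ on your box): a raid1 data volume shows something like

    md2 : active raid1 sda3[0] sdb3[1]
          3902886912 blocks super 1.2 [2/2] [UU]

    while a linear array shows something like

    md2 : active linear sda3[0] sdb3[1]
          7805773824 blocks super 1.2 0k rounding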

  • David2012  Posts: 5
    Hi Mijzelf,
    Thank you for your reply! Sorry for the late response, but I've been waiting for the NAS to finish repairing the array after I pulled out the new disk this morning and put it back in.
    Here's a screenshot of the output you asked for:

    [screenshot of the cat /proc/mdstat output]

    So, what's the next step?
  • Mijzelf  Posts: 2,600  Guru Member
    OK, it's raid1 (the data volume is md2). Do you care which disk is which? If you don't, you can remove one disk from the 'multidisk' array md2:
    su
    mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3
    
    After this, md2 is a single-disk raid1 array containing only disk sda, and the firmware is not supposed to complain about array degradation.
    But I don't know if disk sdb will then be recognized as a new, empty disk. So you can delete its partition table:
    dd if=/dev/zero of=/dev/sdb count=64
    
    This will wipe the first 64 sectors of disk sdb, which contain the partition table. As this change is not detected by the firmware, a reboot is necessary:
    sync
    reboot
    
    The problem here is that I don't know which physical disk is sdb. When the disks have equal size it doesn't really matter, as both disks have identical content. If you are OK with that, you can execute the commands as given. It *might* be the second disk, but I'm not sure about that.
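
    If you want to be sure which physical disk is sdb before wiping anything, a couple of read-only checks may help (just a sketch, assuming the NAS326 firmware exposes the usual Linux sysfs entries):

    mdadm --detail /dev/md2
    cat /sys/block/sda/device/model
    cat /sys/block/sdb/device/model

    The first command lists which partitions are still active members of the array; the other two print the drive model strings, which you can compare with the labels on the physical disks (this only helps if the two drives are different models).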
  • David2012  Posts: 5
    I executed the commands and this is the output:

    ~ # mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3
    mdadm: set /dev/sdb3 faulty in /dev/md2
    mdadm: hot removed /dev/sdb3 from /dev/md2
    ~ # dd if=/dev/zero of=/dev/sdb count=64
    64+0 records in
    64+0 records out
    32768 bytes (32.0KB) copied, 0.002049 seconds, 15.3MB/s
    ~ # sync
    ~ # reboot

    Now I have Disk1 shown as degraded, and the repair option offered is to replace it with Disk2. I can see that the data is not damaged and I still have access to it.
    What should I do now?

    Thank you for your help!
  • Mijzelf  Posts: 2,600  Guru Member
    OK, that is expected behavior, except for the degraded part. It shouldn't be degraded, just a single-disk array (which is actually the same thing, except that the raid header says the array should have more members).
    Does mdstat say that md2 is missing a member?
    cat /proc/mdstat
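
    You could also compare the member count in the raid header with the number of devices actually present:

    su
    mdadm --detail /dev/md2

    In that output, "Raid Devices" is how many members the header expects and "Active Devices" is how many are really there; if they differ, the array is considered degraded.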
  • David2012  Posts: 5
    Here's an output:

    ~ $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
    md2 : active raid1 sda3[2]
          3902886912 blocks super 1.2 [2/1] [_U]

    md1 : active raid1 sda2[2]
          1998784 blocks super 1.2 [2/1] [_U]

    md0 : active raid1 sda1[2]
          1997760 blocks super 1.2 [2/1] [_U]

    unused devices: <none>

  • Mijzelf  Posts: 2,600  Guru Member
    Answer ✓
    Ah right. So the md2 array still has 2 assigned members. Apparently --remove doesn't decrease the member count.
    After re-reading the man page of mdadm, I found that the command to set the number of members to 1 should be:
    su
    mdadm --grow /dev/md2 --raid-devices=1
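
    If the command is accepted, /proc/mdstat should afterwards show md2 as [1/1] [U] instead of [2/1] [_U]:

    cat /proc/mdstat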
  • David2012  Posts: 5
    Thank you, Mijzelf! After I added --force before setting the number of members, it worked!
    Now I have a healthy volume and the second disk is "free".
    Thank you again, I really appreciate your help
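
    For anyone finding this thread later: going by the description above, the complete command is presumably something like

    su
    mdadm --grow /dev/md2 --force --raid-devices=1

    since mdadm refuses to reduce a raid1 array to a single member unless --force is given before the --raid-devices option.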
