David2012

Comments

  • Thank you, Mijzelf! After I added --force before setting the number of members, it worked! Now I have a healthy volume and the second disk is "free". Thank you again, I really appreciate your help. (See the command sketch after these comments.)
  • Here's the output:
    ~ $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
    md2 : active raid1 sda3[2]
          3902886912 blocks super 1.2 [2/1] [_U]
    md1 : active raid1 sda2[2]
          1998784 blocks super 1.2 [2/1] [_U]
    md0 : active raid1 sda1[2]
          1997760 blocks super 1.2 [2/1] [_U]
    unused devices: <none>
  • I executed the commands and this is the output:
    ~ # mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3
    mdadm: set /dev/sdb3 faulty in /dev/md2
    mdadm: hot removed /dev/sdb3 from /dev/md2
    ~ # dd if=/dev/zero of=/dev/sdb count=64
    64+0 records in
    64+0 records out
    32768 bytes (32.0KB) copied, 0.002049 seconds, 15.3MB/s
    ~ # sync
    ~ #
    …
  • Hi Mijzelf, thank you for your reply! Sorry for the late response, but I've been waiting for the NAS to finish repairing after I pulled out the new disk this morning and put it back in. Here's a screenshot of the output you asked for: So, what's the next step?
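
For reference, a minimal sketch of the kind of command the --force fix above refers to, assuming the member count of each degraded RAID1 array is reduced with mdadm --grow (the md device names come from the /proc/mdstat output in this thread; the exact command used was not posted, so adapt as needed):

    # Assumption: shrink each degraded RAID1 array to a single member.
    # mdadm refuses to reduce --raid-devices below the current count
    # unless --force is given, which is the flag mentioned above.
    mdadm --grow /dev/md2 --raid-devices=1 --force
    mdadm --grow /dev/md1 --raid-devices=1 --force
    mdadm --grow /dev/md0 --raid-devices=1 --force

After this, cat /proc/mdstat should report each array as [1/1] [U] instead of the degraded [2/1] [_U].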