Mr_C  Freshman Member

Comments

  • Sorry for the silence - I've spent the last few days trying to copy off what I can (just not my iTunes library, annoyingly). I think it's time to write it off, especially as I can't seem to get dd_rescue to install (there's a ddrescue sketch after this list).
  • Hold that thought - now drive OB is showing as faulty and the RAID has gone from repairing to degraded. AAAAAAAAAARGHHHHHHHHHHH!!!!!!!! Two knackered disks? Jeez.
  • D'oh. Well that's embarrassing. However, thanks to you, I now have a working volume! What's more, I've switched back in the replacement disk and it's recovering. I do have a whole directory which isn't showing up, however, despite the capacity of the volume suggesting it's there... any more suggestions, you clever…
  • With just one disk, OB, the results are as follows (single-member assemble sketch after this list):
    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1]
    md0 : inactive sda2[2](S)
          1952996928 blocks super 1.2
    ~ # cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    none /proc/bus/usb usbfs rw,relatime 0 0
    devpts…
  • To confirm, I have:
    OA: Old disk, showing as faulty (hardware error)
    OB: Old disk, not faulty, shows as rebuilding constantly
    N: New disk, originally failed to build, only builds from OA but constantly fails to do so.
    When the NAS contains OA and OB, the volume shows as degraded, rebuilding with both of these disks inserted. If…
  • Hello again - sorry for the delay in responding. I'm having difficulty unmounting (see the stop/unmount sketch after this list):
    mdadm --stop /dev/md0
    mdadm: Cannot get exclusive access to /dev/md0: possibly it is still in use.
    Do I need to unmount before trying the next step? i.e. mdadm --create --assume-clean --level=1 --raid-devices=2 --metadata=1.2 /dev/md0…
  • Thanks for that. Just to be clear though, the NAS currently has two drives - sdb2 is the faulting drive, sda2 is the drive originally in the volume, currently "rebuilding". The new drive is not in the NAS at present. So, do I follow the process you've suggested above or switch sda2 out for the replacement drive? I think the…
  • This would be the error - it is the originally faulty disk, mind you (read-test sketch after this list).
    dd if=/dev/sdb2 of=/dev/null bs=16M count=64
    dd: /dev/sdb2: Input/output error
    I'm just hoping the rebuilding spare is going to get there....
  • So, after a bit more messing about I've found the following - can I presume that the last line, "spare rebuilding", is a dead giveaway as to what the wretched thing is doing? D'oh. (Resync-watching sketch after this list.)
    ~ # mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Sun Sep 14 10:15:30 2014
         Raid Level : raid1
         Array Size : 1952996792 (1862.52…
  • Hello again - so, various faffing about later, and:
    mdadm --remove /dev/md0 /dev/sdb2
    mdadm: hot remove failed for /dev/sdb2: Device or resource busy
    I can't seem to get past this (fail-then-remove sketch after this list). I've also tried getting the array to repair again through the UI, but it got precisely nowhere before it bombed out unaided. Below following…
  • Hi again - sorry for the delayed response and thanks again for your help. So, I've switched back in what I had thought to be the originally faulty disk in slot 1 (where it came from) and the replacement disk in slot 2. The result is that I now have a red LED on slot 1 but the NAS is seemingly up. The UI is telling me that…
  • Mijzelf - you are indeed a hero for getting back to me. So, the output for sdb2 is below. In answer to the other question, I swapped out the disk which was failing (sdb1 presumably) as the volume wouldn't repair properly. I've tried putting the failed drive back into the unit and trying to repair through the UI but this…
  • BTW, output per the above in case anyone's keen...:
    ~ $ cat /proc/partitions
    major minor  #blocks  name
       7     0     143360 loop0
       8     0 1953514584 sda
       8     1 1953512448 sda1
       8    16 1953514584 sdb
       8    17     514048 sdb1
       8    18 1952997952 sdb2
      31     0       1024 mtdblock0
      31     1        512 mtdblock1
      31     2        512 mtdblock2
      31     3        512 mtdblock3
      31     4      10240 mtdblock4
      31     5      10240…
  • Hi - I've exactly the same problem. Just raised a case with Zyxel. Any suggestions would be life-saving....
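
On the dd_rescue point above: a minimal sketch of imaging a failing member with GNU ddrescue before doing anything else with it. The device name /dev/sdb, the destination path /mnt/backup and the retry count are illustrative assumptions, not values taken from the thread.

    # First pass: grab everything that reads cleanly, skip scraping the bad areas
    ddrescue -n /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map
    # Second pass: go back and retry only the bad areas a few times
    ddrescue -r3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map

The map file lets the second run resume exactly where the first left off, so the dying disk only gets hammered where it has to be.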
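
On the inactive-array output ("md0 : inactive sda2[2](S)"): a sketch of bringing a RAID1 up from its single surviving member so the data can be copied off. It assumes /dev/sda2 really is the good member, as in that output, and uses /mnt/recovery as a made-up scratch mount point; the NAS firmware normally mounts the volume under its own path instead.

    mdadm --stop /dev/md0                              # clear the half-assembled array
    mdadm --assemble --run --force /dev/md0 /dev/sda2  # start the mirror with one member
    cat /proc/mdstat                                   # should now show md0 active (degraded)
    mkdir -p /mnt/recovery                             # scratch mount point (assumed name)
    mount -o ro /dev/md0 /mnt/recovery                 # mount read-only while copying data off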
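
On the "Cannot get exclusive access to /dev/md0" error: that message normally means something still holds the array open, most often the mounted filesystem on it. A sketch of the usual order of operations, using only tools the busybox shell on these boxes is likely to have:

    grep md0 /proc/mounts    # see where (or whether) the array is still mounted
    umount /dev/md0          # unmount the filesystem first
    swapoff -a               # in case a swap area lives on the array
    mdadm --stop /dev/md0    # should now succeed

Only once the array stops cleanly is it sensible to move on to the --create --assume-clean step quoted in that comment.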
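
On the dd read test: reading the whole partition to /dev/null is a fair health check, and a SMART self-test gives a second opinion where smartctl is installed (it is not part of every NAS firmware, and the device names below are placeholders):

    # Read the entire partition and discard the data; any I/O error means bad sectors
    dd if=/dev/sda2 of=/dev/null bs=16M
    # Optional second opinion, only if smartctl exists on the box
    smartctl -t short /dev/sda
    sleep 150                # a short self-test usually takes a couple of minutes
    smartctl -a /dev/sda     # the self-test log near the end shows the result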
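
On the "spare rebuilding" line: yes, that is exactly what it sounds like - the array is copying everything from the surviving member onto the newly added one. A sketch of watching the resync without disturbing it:

    cat /proc/mdstat                                 # progress bar, speed and ETA while resyncing
    mdadm --detail /dev/md0 | grep -i rebuild        # "Rebuild Status : NN% complete" once underway
    while true; do cat /proc/mdstat; sleep 60; done  # poor man's watch; Ctrl-C to stop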
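
On the "hot remove failed ... Device or resource busy" error: mdadm will only remove a member that has already been marked as failed. A sketch of the usual sequence, assuming /dev/sdb2 is the member to pull and the replacement later appears as /dev/sdc2 (both device names are assumptions):

    mdadm /dev/md0 --fail /dev/sdb2     # mark the member as failed first
    mdadm /dev/md0 --remove /dev/sdb2   # the hot remove should now go through
    mdadm --detail /dev/md0             # confirm only one active member remains
    mdadm /dev/md0 --add /dev/sdc2      # add the replacement; the rebuild starts automatically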