Comments
-
Good news! I finally have my data back! I tried to recover the superblock with e2fsck using the backups it listed, but none of them worked :( So I went back to the old plan and tried the logical device combinations again. The procedure I followed is this one (one pass is sketched below):
1) Deactivate the volume with vgchange -an
2) Stop md2
3) …
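For anyone following along, a minimal sketch of one pass of that trial-and-error, assembled from the commands already shown in this thread. The member order below is only an example; note that --assume-clean leaves data blocks alone but still rewrites the RAID metadata, so save the old mdadm --examine output first:

~ # vgchange -an
~ # mdadm --stop /dev/md2
~ # mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 \
        --chunk=64K --layout=left-symmetric /dev/md2 \
        /dev/sdd3 /dev/sdb3 missing /dev/sda3
~ # e2fsck -n /dev/md2    # read-only check: a sane superblock means the order is plausible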
-
Here it is, sir:

~ # e2fsck -n /dev/md2
e2fsck 1.42.12 (29-Aug-2014)
Warning!  /dev/md2 is in use.
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md2
The superblock could not be read or does not describe a valid…
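When the primary superblock is bad, the backup locations can be listed without writing anything. A hedged sketch, assuming the filesystem was created with default parameters (the 32768 offset is a typical value, not one confirmed on this NAS):

~ # mke2fs -n /dev/md2          # -n = dry run: prints where the backups would live, writes nothing
~ # e2fsck -n -b 32768 /dev/md2 # retry read-only against one of the printed backup offsets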
-
It shows nothing :(
-
~ # cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
/dev/md0 /firmware/mnt/sysdisk ext4…
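Notably, md2 is absent from that list, so the earlier "in use" warning is not a mount. A sketch of checking what else might hold it, assuming the NAS firmware exposes sysfs and ships dmsetup:

~ # ls /sys/block/md2/holders/   # LVM/device-mapper devices stacked on the array show up here
~ # dmsetup ls                   # lists any device-mapper targets, e.g. LVM logical volumes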
-
Maybe I got something: on this combination, C / A / B / D, the NAS said it is possible to repair the volume (even if I can't), so I was confident I would reach the data, but again nothing happens. I was going to change the configuration, but when I give the command "mdadm --stop /dev/md2" it responds with this: "mdadm: Cannot get exclusive…
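That error usually means something above the array is still open. A hedged sketch of chasing it down before retrying (fuser may or may not exist on the NAS's busybox userland):

~ # fuser -vm /dev/md2    # list processes using the device, if fuser is available
~ # vgchange -an          # deactivate any LVM volume sitting on top of md2
~ # mdadm --stop /dev/md2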
-
So, what can I do to get my data back?
-
At this point in the quest, I really think not. I don't know why: I was able to see the data from the NAS Manager, then suddenly I was not. So I assume no disk contains an original RAID header anymore. How can I check it?
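A hedged sketch of checking each member for surviving metadata (the partition names are the ones used throughout this thread):

~ # for d in /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3; do
>     echo "== $d =="
>     mdadm --examine "$d"   # prints Array UUID, Update Time and event count if a superblock survives
> done

An original header keeps the old Update Time and event count; one rewritten by a later --create shows a fresh creation time instead.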
-
Sorry, I think I missed the part of the status you asked for. What do you need precisely?
-
I mean, since you said 3 drives out of 4 are active, I marked the one with the wrong details as missing... am I wrong? This is what I tried:

mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sdb3 missing /dev/sda3

where the drive mounted as sdc3 has…
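If any disk still carries its original superblock, its slot can be read instead of guessed; a short sketch using the same device names:

~ # mdadm --examine /dev/sdb3 | grep -E 'Array UUID|Device Role'
# "Device Role : Active device 1" would mean sdb3 belongs in position 1 (second) of the --create list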
-
Well, I did all 6 possible combinations, leaving "missing" in the third position and switching sda, sdb and sdd accordingly, always stopping /dev/md2 first (or md3 sometimes; the NAS switched the active one, I don't know why), but I get the same result: I can't rebuild the RAID... any new idea?
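For completeness, those six orders can be scripted rather than typed by hand; a sketch under the same parameters (--run skips mdadm's confirmation prompt so the loop doesn't stall):

for order in "sda3 sdb3 sdd3" "sda3 sdd3 sdb3" "sdb3 sda3 sdd3" \
             "sdb3 sdd3 sda3" "sdd3 sda3 sdb3" "sdd3 sdb3 sda3"; do
    set -- $order
    mdadm --stop /dev/md2
    mdadm --create --run --assume-clean --level=5 --raid-devices=4 --metadata=1.2 \
          --chunk=64K --layout=left-symmetric /dev/md2 \
          /dev/$1 /dev/$2 missing /dev/$3
    e2fsck -n /dev/md2 && echo ">>> plausible order: $order"
done

Worth noting: if the "missing" slot itself is wrong, the search space is 4 positions x 6 orders = 24 combinations, not 6.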
-
This is what cat /proc/mdstat says:

~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid5 sda3[1] sdb3[3] sdd3[2]
      5848147968 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk
md2 : inactive sdc3[0](S) 1949383680…
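To decode that: [4/3] [_UUU] means a 4-member array running on 3 devices, with the first slot missing, while sdc3 sits alone in an inactive md2 as a spare. mdadm --detail shows the same picture more verbosely (a sketch):

~ # mdadm --detail /dev/md3   # expect "State : clean, degraded" and one slot listed as removed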
-
I did it already:

~ # mdadm --stop /dev/md2
mdadm: stopped /dev/md2
~ # mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sda3 /dev/sdb3 missing /dev/sdd3
mdadm: super1.x cannot open /dev/sda3: Device or resource busy
mdadm: /dev/sda3 is not suitable for this…
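That "Device or resource busy" matches the mdstat above: sda3 is still assembled into the active md3. A sketch of freeing the members first (assuming md3 is the auto-assembled array holding them):

~ # cat /proc/mdstat        # confirm which array currently owns sda3/sdb3/sdd3
~ # mdadm --stop /dev/md3   # release the member partitions
# ...then retry the same mdadm --create as above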
-
Ok, I got it. Right now I'm getting this:

mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sda3 /dev/sdb3 missing /dev/sdd3
mdadm: super1.x cannot open /dev/sda3: Device or resource busy
mdadm: /dev/sda3 is not suitable for this array.
mdadm: super1.x…
-
Well, I tried the solution with the 3 disks... nothing happened... Now I'm trying the combinations with four disks, but I noticed something bad: 3 disks out of 4 are marked as "spare", though I didn't make any changes; yet mdadm --examine still gives me the same information...
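A quick hedged check of what each member now claims to be (same --examine as before, filtered to the role field):

~ # mdadm --examine /dev/sd[abcd]3 | grep -E '^/dev/|Device Role'
# "Device Role : spare" versus "Active device N" separates real spares from members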
-
Hello there. I'm facing this issue: I got a Zyxel NAS 542 with 4x2TB hard drives in a RAID5 config, fully working. I just wanted to replace one drive with a WD Red, so I got one from Amazon, replaced it, and the NAS told me a new drive was found and to start re-syncing the array... aaaand it's gone. The RAID was not working, so I…