Comments
-
Hi, I finally got my data back, but the recovery process was extremely long and difficult, and it only works if the actions are taken in the right order. [Before that I had taken all of the above-mentioned steps, but I couldn't get it working.] Equipment: * Zyxel 540 NAS with 4 drives inside, 2 TB (WD Green) each; drives were…
-
Hi, neither lvscan

  ~ # lvscan -a -b -v
      Using logical volume(s) on command line.
      Finding all volume groups.
    No volume groups found.

nor vgscan

  /dev # vgscan --mknodes -v
    Wiping cache of LVM-capable devices
    Wiping internal VG…
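A side note in case it saves someone a step: on these boxes the LVM physical volume normally sits on top of the md array, so lvscan/vgscan can only find anything once the array itself is running. A minimal check sequence, assuming standard LVM tools (the layering itself is an assumption about this firmware):

  cat /proc/mdstat   # is /dev/md2 assembled and active at all?
  pvscan             # look for a surviving physical-volume signature
  vgchange -ay       # activate any volume groups that turn up

If pvscan still finds nothing with the array running, the PV label itself is probably gone.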
-
BTW is it possible to stop the buzzer from the command line?
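A hedged pointer, since the interface is firmware-specific: Zyxel's NAS firmware is reported to ship a small buzzer helper, so something along these lines may work (the tool name and flag are assumptions, check your box):

  which buzzerc   # does the helper exist at all?
  buzzerc -s      # assumption: '-s' silences the current alarm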
-
Well, I know how it sounds, but that's exactly what happened. Before execution of the --create command I had the 1st volume accessible and the second was unavailable. Afterwards, both went down. As soon as I'm at home I'll post the output from lvscan. Thanks, B.
-
Hi, after endless attempts (all in vain) to reassemble the array, I'm just wondering what type of RAID it might have been? I started with 2 volumes, each comprising 2 drives.

  Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
  md3 : active raid1 sdc3[0] sdd3[1]
        1949383488 blocks super 1.2 [2/2]…
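The level question can be settled from the disks themselves, since each md superblock records the level it was created with. A minimal sketch, assuming the member partitions are still readable (device names as in the mdstat above):

  mdadm --examine /dev/sd[abcd]3 | grep -E '/dev/|Raid Level|Raid Devices'

This works even when the array refuses to assemble.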
-
I'm sure that it was raid10. Creating one isn't anything exceptionally difficult. If one drive fails (which is what actually happened), raid10 enters "degraded mode". But on my attempt to replace the faulty drive some of my shares vanished (which shouldn't have happened), and rebuilding the array didn't help. mdadm automatic…
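For reference, the usual replacement sequence on a degraded md array is short; a sketch assuming the new disk is sdb and partitioned like the old one:

  mdadm /dev/md2 --add /dev/sdb3   # add the fresh partition, rebuild starts automatically
  cat /proc/mdstat                 # watch the resync progress

Shares vanishing at that step would normally point at a layer above md (LVM or the filesystem) rather than at the array itself.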
-
I don't expect it to work since I've destroyed my raid by

  mdadm --create --assume-clean --level=1 --raid-devices=2 --metadata=1.2 /dev/md2 missing /dev/sdb3

but the output is as follows:

  mdadm: No md superblock detected on /dev/md2.
  mdadm: No md superblock detected on /dev/md3.
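Worth noting that this particular message is expected either way: mdadm --examine reads the superblock stored on member partitions, not on the assembled device, so pointing it at /dev/md2 or /dev/md3 will normally report a missing superblock. The two views, as a sketch:

  mdadm --examine /dev/sdb3   # superblock as stored on the member partition
  mdadm --detail /dev/md2     # runtime view of the assembled array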
-
Dunno if it helps, but my initial configuration was raid10.
-
Yes, I did, at least I tried:

  ~ # cat /proc/mdstat
  Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
  md3 : active raid1 sdc3[0] sdd3[1]
        1949383488 blocks super 1.2 [2/2] [UU]
  md2 : active raid1 sdb3[1]
        1949383488 blocks super 1.2 [2/1]…
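For anyone decoding those status fields (standard mdstat notation): the bracketed numbers are devices-expected/devices-active, so [2/2] [UU] is a healthy mirror while [2/1] means md2 is running degraded on a single member. The detail view spells it out:

  mdadm --detail /dev/md2   # typically shows "State : clean, degraded" and which slot is missing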
-
I guess that now I do have something to worry about: the NAS keeps beeping and all the shares have vanished. There's probably nothing left to do but try to recover the files. I'll try to mount the drives on another PC running Linux and recover them :( Thank you, anyway.
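If it helps anyone going the same route, a sketch of the read-only rescue approach on another Linux box (device names are examples, check lsblk first; the firmware putting LVM on top of the mirror is an assumption, and the VG/LV names below are placeholders):

  mdadm --assemble --run --readonly /dev/md2 /dev/sdb3   # bring up the surviving half, read-only
  vgscan && vgchange -ay                                 # activate the volume group, if LVM is in play
  lvs                                                    # list the actual VG/LV names
  mount -o ro /dev/<vg>/<lv> /mnt                        # placeholders: substitute names from lvs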
-
Hi Mijzelf, initially I was getting a reply that sdb3 is busy

  ~ # mdadm --assemble /dev/md2 /dev/sdb3 --force
  mdadm: /dev/sdb3 is busy - skipping

so I stopped md2

  ~ # mdadm --stop /dev/md2
  mdadm: stopped /dev/md2

Now I get that there are no drives: mdadm:…
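For anyone retracing this, the sequence the "busy" message forces is roughly the following (a sketch, assuming sdb3 is the only surviving member):

  mdadm --stop /dev/md2                         # release the member claimed by the half-assembled array
  mdadm --examine /dev/sdb3                     # confirm a superblock is still present before retrying
  mdadm --assemble --force /dev/md2 /dev/sdb3   # then retry the forced assemble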
-
Hi Mijzelf, it looks as follows:

  mdadm: cannot open /dev/sda3: No such device or address
  /dev/sdb3:
            Magic : a92b4efc
          Version : 1.2
      Feature Map : 0x2
       Array UUID : d69d46c2:8e90e04e:a2188060:2c230b03
             Name : NAS540:2
    Creation Time : Fri Jul 17…
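Since the superblock still carries the Array UUID, one option worth trying is to let mdadm pick up the members by UUID instead of naming devices (syntax per the mdadm man page; behaviour without an mdadm.conf can vary):

  mdadm --assemble /dev/md2 --uuid=d69d46c2:8e90e04e:a2188060:2c230b03

That sidesteps the unreadable /dev/sda3 and pulls in whatever members still match.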
-
Hi guys, I'm facing a similar issue: my drive (also md2) went down and I got it replaced with a new one.

  Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
  md3 : active raid1 sdc3[0] sdd3[1]
        1949383488 blocks super 1.2 [2/2] [UU]

  md2 : inactive…
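Before forcing anything on an inactive array, it may be worth comparing what each member thinks happened; a sketch using the examine view (partition names follow the mdstat above):

  mdadm --examine /dev/sd[abcd]3 | grep -E '/dev/|Events|Array State'

Members whose Events counts match can usually be reassembled together; a stale count marks the drive that dropped out.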