NAS540: Volume down, repairing failed, how to restore data?
All Replies
-
Maybe

mdadm --stop /dev/md2

first?
-
I did it already:

# mdadm --stop /dev/md2
mdadm: stopped /dev/md2
~ # mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sda3 /dev/sdb3 missing /dev/sdd3
mdadm: super1.x cannot open /dev/sda3: Device or resource busy
mdadm: /dev/sda3 is not suitable for this array.
mdadm: super1.x cannot open /dev/sdb3: Device or resource busy
mdadm: /dev/sdb3 is not suitable for this array.
mdadm: super1.x cannot open /dev/sdd3: Device or resource busy
mdadm: /dev/sdd3 is not suitable for this array.
mdadm: create aborted
Is it maybe better to mount all the drives on a computer with a Linux live system instead of working directly on the NAS?
-
Maybe another md device is occupying the partitions? Have a look with

cat /proc/mdstat

Is it maybe better to mount all drives on a computer with a Linux live system instead of working directly on the NAS?

It shouldn't make any difference. Most people have trouble connecting 4 SATA disks to a computer. If you have the possibility, be my guest. But beware: somehow some of your RAID members had a bitmap in the header, while at least one hadn't. I'm pretty sure the NAS doesn't add bitmaps to its arrays.
-
This is what cat /proc/mdstat says:

~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid5 sda3[1] sdb3[3] sdd3[2]
      5848147968 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk
md2 : inactive sdc3[0](S)
      1949383680 blocks super 1.2
md1 : active raid1 sdb2[4] sdd2[6] sdc2[5] sda2[7]
      1998784 blocks super 1.2 [4/4] [UUUU]
md0 : active raid1 sdb1[4] sdd1[6] sdc1[5] sda1[7]
      1997760 blocks super 1.2 [4/4] [UUUU]

So I guess md3 is the one working... right?
-
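For what it's worth, this output also explains the earlier "Device or resource busy" errors: sda3, sdb3 and sdd3 are already claimed by md3, so they cannot be handed to a new array until md3 is stopped. A minimal sketch of freeing them and retrying, using a hypothetical `run` wrapper that only prints the commands by default, since --create with the wrong disk order is destructive:

```shell
#!/bin/sh
# run: print the command in dry-run mode (the default), execute it otherwise.
# Set DRY_RUN=0 only once you are sure about what will be overwritten.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

# Free the partitions that md3 is holding, then retry the create.
run mdadm --stop /dev/md3
run mdadm --create --assume-clean --level=5 --raid-devices=4 \
    --metadata=1.2 --chunk=64K --layout=left-symmetric \
    /dev/md2 /dev/sda3 /dev/sdb3 missing /dev/sdd3
```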
Well, I tried all 6 possible combinations, leaving "missing" in the third position and switching sda, sdb and sdd accordingly, always stopping /dev/md2 first (or md3 sometimes; the NAS switched the active one, I don't know why), but I get the same result: I can't rebuild the RAID. Any new ideas?
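For reference, the 6 combinations described above (the three remaining members permuted around a fixed "missing" in the third slot, 3! = 6 orderings) can be listed mechanically. A sketch that only prints the candidate commands, it does not run them:

```shell
#!/bin/sh
# Print every mdadm --create candidate with "missing" fixed in slot 3
# and sda3/sdb3/sdd3 permuted over the remaining slots (3! = 6 orderings).
list_candidates() {
  for d1 in sda3 sdb3 sdd3; do
    for d2 in sda3 sdb3 sdd3; do
      [ "$d2" = "$d1" ] && continue
      for d4 in sda3 sdb3 sdd3; do
        [ "$d4" = "$d1" ] && continue
        [ "$d4" = "$d2" ] && continue
        echo "mdadm --create --assume-clean --level=5 --raid-devices=4" \
             "--metadata=1.2 --chunk=64K --layout=left-symmetric" \
             "/dev/md2 /dev/$d1 /dev/$d2 missing /dev/$d4"
      done
    done
  done
}

list_candidates
```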
-
leave it "missing" in the third position
Why the 3rd? Do you know something you didn't post?
-
I mean, since you said 3 drives out of 4 are active, I marked the one with the wrong details as missing... am I wrong?
This is what I tried:

mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sdb3 missing /dev/sda3

where the drive mounted as sdc3 has an index different from the others.
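Whichever ordering is tried, the usual way to tell a correct permutation from a wrong one is to check the resulting array read-only before trusting it: a correct disk order yields a clean (or nearly clean) filesystem, a wrong one produces massive errors. A sketch, again with a dry-run wrapper that only prints the commands by default (`/mnt/test` is a hypothetical mount point):

```shell
#!/bin/sh
# Dry-run by default (prints commands); set DRY_RUN=0 to really execute.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

# Read-only filesystem check: -n answers "no" to all repair prompts,
# so nothing is written to the array.
run fsck -n /dev/md2

# Alternatively, mount read-only and inspect the files by eye.
run mount -o ro /dev/md2 /mnt/test
```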
-
Right. You had sd[abc]3, which couldn't possibly contain a header generated by the NAS, as their headers contained a bitmap. For what it's worth, they were 'Active device' 1, 2 and 3 of a 4-disk array.

sdd3 looked like an original, NAS-made member, but its timestamps made me doubt that: created on Sun Oct 13 20:01:38 2019, last updated on Sun Oct 13 20:11:28 2019. So that disk was only 10 minutes part of an array. I asked you about that status, but you didn't answer.

Partition sdd3 was 'Active device' 0, so if that disk is the only one whose original position is known, the 1st disk is missing. But if the header on sdd3 is bogus, there is absolutely nothing to say about which disk is missing, unless you have extra info.
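The header fields referred to above come from mdadm --examine, which prints them per member (e.g. mdadm --examine /dev/sdd3). A small sketch that filters out the lines relevant to reconstructing the disk order; the heredoc holds an illustrative excerpt built from the sdd3 values quoted above, not a captured dump:

```shell
#!/bin/sh
# summarize: keep only the superblock fields that matter when working out
# the disk order: device role, timestamps, array size, bitmap presence.
summarize() {
  grep -E 'Raid Devices|Creation Time|Update Time|Internal Bitmap|Device Role'
}

# On the NAS you would run:  mdadm --examine /dev/sdd3 | summarize
# Illustrative excerpt (values taken from this thread):
summarize <<'EOF'
     Raid Devices : 4
    Creation Time : Sun Oct 13 20:01:38 2019
      Update Time : Sun Oct 13 20:11:28 2019
      Device Role : Active device 0
EOF
```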
-
Sorry, I think I missed the part about the status you asked for. What do you need precisely?
-
So I guess you have no disk left containing an original RAID header? Or was your original array created on Oct 13?