NAS540: Volume down, repairing failed, how to restore data?
All Replies
-
At this point, I really don't think so.
I don't know why, but I was able to see the data from the NAS Manager, and then suddenly I wasn't.
So I have to assume I have no disk containing an original RAID header.
How can I check it?
-
How can I check it?
Not really. The header looks valid for a NAS disk, except for the timestamps. If you don't know whether the timestamps could be valid, there is no way to check for validity.
And that means that you can't know for sure which disk is missing.
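(For reference, the header in question can be dumped per disk with mdadm; a sketch, assuming the data volume sits on the third partition of each drive, which is the usual NAS540 layout:)
mdadm --examine /dev/sda3
The "Update Time" line is the timestamp meant above, and "Events" is the counter mdadm compares between members; whether those values are plausible for your array is exactly the part you can't verify.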
-
So, what can I do to recover the data?
-
I see no simpler way than trying to mount all 24 permutations.
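To make that concrete, a rough sketch of such a brute-force loop is below. Everything in it is an assumption that has to match your array: that the data partitions are /dev/sda3 to /dev/sdd3, that the volume is a 4-disk RAID5 with 1.2 metadata and a 64K chunk, and that all four disks are still present. Note that mdadm --create rewrites the RAID superblocks (though --assume-clean keeps it from touching the data area), so don't run anything like this until you are sure of the parameters:
DISKS="sda3 sdb3 sdc3 sdd3"
for a in $DISKS; do for b in $DISKS; do for c in $DISKS; do for d in $DISKS; do
  # skip orders that use the same partition twice
  [ "$a" = "$b" ] && continue
  [ "$a" = "$c" ] && continue
  [ "$a" = "$d" ] && continue
  [ "$b" = "$c" ] && continue
  [ "$b" = "$d" ] && continue
  [ "$c" = "$d" ] && continue
  echo "Trying order $a $b $c $d"
  mdadm --stop /dev/md2 2>/dev/null
  # --assume-clean: don't start a resync; --run: don't ask for confirmation
  mdadm --create /dev/md2 --run --assume-clean --level=5 --raid-devices=4 \
        --metadata=1.2 --chunk=64 /dev/$a /dev/$b /dev/$c /dev/$d
  # read-only filesystem check; a clean result marks a candidate order
  e2fsck -n /dev/md2 && echo ">>> order $a $b $c $d looks consistent"
done; done; done; done
The right order may still show some filesystem errors if the array was damaged, so treat a relatively clean e2fsck run as a candidate, not as proof.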
-
Maybe I got something with the combination C / A / B / D.
The NAS says it is possible to repair the volume (even though the repair doesn't actually work), so I was confident I would reach the data, but again nothing happens.
I was going to change the configuration, but when I give the command "mdadm --stop /dev/md2" it responds with: "mdadm: Cannot get exclusive access to /dev/md2: Perhaps a running process, mounted filesystem or active volume group?"
What now?
-
Well, are you sure it isn't mounted? Have a look:
cat /proc/mounts
-
~ # cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
/dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
-
Apparently not. Well, something is keeping the md array occupied. Probably e2fsck. Have a look:
lsof | grep /dev/md2
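(If lsof comes back empty, a few other quick checks can show what else might be holding the array; a sketch, assuming these commands exist in the NAS540 firmware shell:)
cat /proc/mdstat        # is md2 listed, and is a resync or check running on it?
fuser /dev/md2          # any process with the block device open
pvdisplay 2>/dev/null   # is md2 claimed by LVM as a physical volume?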
-
It shows nothing
-
And are you sure you still can't stop md2? How about mounting? If mounting fails, what is the output of
e2fsck -n /dev/md2
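(For the mount attempt, a read-only try is the safe variant; a sketch, where the mount point and the ext4 type are assumptions based on the usual NAS540 data volume:)
mkdir -p /tmp/md2
mount -t ext4 -o ro /dev/md2 /tmp/md2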