NAS540: Volume down, repairing failed, how to restore data?
All Replies
-
BTW, is it possible to stop the buzzer from the command line?
-
Yes.
buzzerc -s && mv /sbin/buzzerc /sbin/buzzerc.old
will stop the buzzer and remove the possibility for the firmware to start it again, till the next reboot.
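If you want the buzzer back later (assuming the rename above was used), reversing the move is enough; a minimal sketch:

mv /sbin/buzzerc.old /sbin/buzzerc    # put the binary back where the firmware expects it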
-
Hi, neither lvscan nor vgscan finds any volume groups:

~ # lvscan -a -b -v
  Using logical volume(s) on command line.
  Finding all volume groups.
  No volume groups found.

/dev # vgscan --mknodes -v
  Wiping cache of LVM-capable devices
  Wiping internal VG cache
  Reading all physical volumes. This may take a while...
  Using volume group(s) on command line.
  Finding all volume groups.
  No volume groups found.
  Creating directory "/dev/mapper"
  Creating device /dev/mapper/control (10, 236)
  Using logical volume(s) on command line.
  Finding all volume groups.
  No volume groups found.
I got lost a bit, so I'm pasting lvmdiskscan. sde is an additional, external USB drive - I intended to robocopy my files to that drive, but I can't access them:

~ # lvmdiskscan
  /dev/loop0 [ 144.00 MiB]
  /dev/sda   [   1.82 TiB]
  /dev/md0   [   1.91 GiB]
  /dev/md1   [   1.91 GiB]
  /dev/md2   [   1.82 TiB]
  /dev/md3   [   1.82 TiB]
  /dev/sde1  [ 128.00 MiB]
  /dev/sde2  [   3.64 TiB]
  1 disk
  7 partitions
  0 LVM physical volume whole disks
  0 LVM physical volumes
Is there an easy way to copy my files from these drives?
-
Is there an easy way to copy my files from these drives?
When the filesystem can't be mounted, and it's not even clear where the filesystem is, the only way I can think of is low-level recovery. Depending on the nature of the data, a tool like PhotoRec can recover much, and it's not hard to use.
The problem is that without the help of the filesystem only the file contents can be restored, not the metadata (filename, timestamp, pathname), as these are stored in the filesystem. So you end up with a (big?) bunch of files with random names and, fortunately, descriptive extensions. (Although I wouldn't be surprised if a docx document is restored as a zip, as it is actually a zip file.)
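For the record, a minimal sketch of such a low-level run with PhotoRec from a Linux PC, assuming the disk from the NAS shows up there as /dev/sdb and a big enough destination is mounted at /mnt/usb (device name and path are assumptions; PhotoRec is interactive and does not need a mountable filesystem):

mkdir -p /mnt/usb/recovered
photorec /log /d /mnt/usb/recovered /dev/sdb    # scans the disk and writes recovered files into /mnt/usb/recovered.* directories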
-
Hi, I finally got back my data, but the recovery process was extremely long and difficult, and it only works if the actions are taken in the right order:
[before that I had taken all of the above-mentioned steps, but I couldn't get it working]
Equipment:
- Zyxel NAS540 with 4 drives inside, 2 TB (WD Green) each; the drives were grouped into 2 volumes of 2 drives each in raid10 (each drive in a volume is mirrored on the other one)
- 4 TB USB drive (Seagate)
2. Then I mounted the 1st drive from the 1st volume of the raid array.
3. I mounted the USB drive and created an ext4 partition on it.
4. I rsynced these 2 drives; the rsync operation took 4 days and nights, which is extremely long and strange (a rough sketch of the corresponding commands follows the list).
5. I unmounted the 1st drive from the 1st volume
6. I mounted the 1st drive from the 2nd volume of the raid array.
7. I rsynced this drive with the USB drive, and this time the rsync operation took 3 days and nights (which is also too long).
8. I unmounted the 1st drive from the 2nd volume.
9. I formatted the former NAS drives via SSH and re-initialised the volumes in the web interface.
10. I'm currently running rsync to synchronise the data on the NAS with the data on the USB drive.
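A rough sketch of what steps 2 to 4 look like on the command line, assuming the data partition of the first drive shows up as /dev/sda3 and the USB drive's data partition as /dev/sde2 (device names and mount points are assumptions; on a mirrored volume the partition may first need to be assembled with mdadm before it can be mounted):

mkdir -p /mnt/vol1 /mnt/usb
mount -o ro /dev/sda3 /mnt/vol1                 # step 2: mount the NAS partition read-only
mkfs.ext4 /dev/sde2                             # step 3: create the ext4 filesystem on the USB drive (erases whatever is on it)
mount /dev/sde2 /mnt/usb
rsync -a --progress /mnt/vol1/ /mnt/usb/vol1/   # step 4: copy everything; -a preserves timestamps and permissions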
This ends my problems. Thank you for your time.
-
Hello there.
I'm facing this issue.
I got a Zyxel NAS542 with 4x2TB hard drives in a raid5 config, fully working.
I just wanted to replace one drive with a WD RED, so I got one from Amazon, replaced it, and the NAS told me a new drive was found and asked to start re-syncing the array... aaaand it's gone.
The raid was not working, so I followed this guide a bit, and it seemed the array was up again with only 3 disks, but every attempt to re-sync it again was a failure. BTW, I was able to see the files from the built-in file browser, but unable to copy them or reach them from any machine (neither Windows nor Linux).
Now, I don't know why, but I'm not able to do anything; the NAS is just beeping and doesn't mount the raid... so I'm out of options.
I've put all the original drives back in, but right now I'm not sure about the order.
I've got some data I want to recover, can someone help me?
I'm posting the result of mdadm --examine, thanks for your help:

/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3528044c:5d163b06:70ef5310:ad5f312d
           Name : ubuntu:metadata=1.2
  Creation Time : Fri Oct 25 12:35:07 2019
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 3898765312 (1859.08 GiB 1996.17 GB)
     Array Size : 5848147968 (5577.23 GiB 5988.50 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 22b227c9:0c224a81:07fb8957:141e2357
Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Oct 25 13:23:26 2019
       Checksum : 3bb16900 - correct
         Events : 12
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 1
    Array State : .AAA ('A' == active, '.' == missing)

/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3528044c:5d163b06:70ef5310:ad5f312d
           Name : ubuntu:metadata=1.2
  Creation Time : Fri Oct 25 12:35:07 2019
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 3898765312 (1859.08 GiB 1996.17 GB)
     Array Size : 5848147968 (5577.23 GiB 5988.50 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 0fba7ad7:3f1dd2a5:8c82adfb:b6b19967
Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Oct 25 13:23:26 2019
       Checksum : 23268749 - correct
         Events : 12
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 2
    Array State : .AAA ('A' == active, '.' == missing)

/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3528044c:5d163b06:70ef5310:ad5f312d
           Name : ubuntu:metadata=1.2
  Creation Time : Fri Oct 25 12:35:07 2019
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 3898765312 (1859.08 GiB 1996.17 GB)
     Array Size : 5848147968 (5577.23 GiB 5988.50 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e6bf9f92:31ce7114:90ea27b0:601e0019
Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Oct 25 13:23:26 2019
       Checksum : b2cc12bf - correct
         Events : 12
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 3
    Array State : .AAA ('A' == active, '.' == missing)

/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 913c687f:d215eba2:93c38d9a:37ca10cc
           Name : NAS542:2 (local to host NAS542)
  Creation Time : Sun Oct 13 20:01:38 2019
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
     Array Size : 5848150464 (5577.23 GiB 5988.51 GB)
  Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : d1b73e24:349c517d:b4009bb5:a93f95f5
    Update Time : Sun Oct 13 20:11:28 2019
       Checksum : 72411265 - correct
         Events : 134
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 0
    Array State : AAAA ('A' == active, '.' == missing)
-
I see 3 partitions sd[abc]3 which are part of a raid array created on Fri Oct 25 12:35:07 2019, and sdd3 which is part of an array created on Sun Oct 13 20:01:38 2019. So I guess you have no disk left containing an original raid header? Or is your original array the one created on Oct 13?

Array member sdd3 has a data offset of 262144 sectors, while sd[abc]3 have a data offset of 264192 sectors, maybe due to the internal bitmap, which is AFAIK not an option on the 54x. So if sdd3 is original, the first ~2000 sectors, which is 1MB, of the first part of the original filesystem are occupied by the raid header. Don't know if that is a real problem. The raid header contains mainly nothing, but I don't know if it's also zeroed out.

If you lost your original disk order, theoretically you'll have to try each order until you get something which contains a valid filesystem. And it should mount right away. A wrong order can seem to contain a valid filesystem, but is unmountable, and repairing it will destroy everything. The number of possibilities on a 4-disk system is 4! = 24. When sdd3 is original, we know it was 'Active device 0', so only 3! = 6 possibilities are left.
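Spelled out, assuming sdd3 really is the original 'Active device 0', those 6 candidate orders are:

sdd3 sda3 sdb3 sdc3
sdd3 sda3 sdc3 sdb3
sdd3 sdb3 sda3 sdc3
sdd3 sdb3 sdc3 sda3
sdd3 sdc3 sda3 sdb3
sdd3 sdc3 sdb3 sda3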
-
Well, I tried the solution with the 3 disks... nothing happened...
Now I'm trying the combinations with four disks, but I noticed something bad... 3 disks out of 4 are marked as "spare", though I didn't make any changes, and mdadm --examine still gives me the same information...
-
Spare? What exactly are you doing? Juggling the physical disks? The idea is to create an array with 'mdadm --create ...' with the 3 partitions and a 'missing' in different sequences. That should never give a spare.

Physically moving the disks should have no effect at all, as their role in the array is written in the header.
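A minimal sketch of testing one such sequence, assuming sdd3 is kept as device 0 and the array is assembled as /dev/md2 (the position of 'missing' and the order of the other partitions are just an example, not the known-correct sequence); never run a filesystem repair on a candidate order:

# release the partitions first if an auto-assembled array still claims them (check: cat /proc/mdstat)
mdadm --stop /dev/md2
# write new raid headers only; --assume-clean prevents a resync, so the data area is not rewritten
mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sda3 missing /dev/sdb3
# a correct order should mount read-only right away; if not, stop the array and try the next sequence
mkdir -p /mnt/test
mount -o ro /dev/md2 /mnt/test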
-
Ok I got it.
Right now I'm getting this:

mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sda3 /dev/sdb3 missing /dev/sdd3
mdadm: super1.x cannot open /dev/sda3: Device or resource busy
mdadm: /dev/sda3 is not suitable for this array.
mdadm: super1.x cannot open /dev/sdb3: Device or resource busy
mdadm: /dev/sdb3 is not suitable for this array.
mdadm: super1.x cannot open /dev/sdd3: Device or resource busy
mdadm: /dev/sdd3 is not suitable for this array.
mdadm: create aborted