NAS326: Restore Deleted JBOD Volume
SpamMaster50000
Posts: 15 Freshman Member
Hi,
Today I had a horrible accident.
I bought a second NAS326 (here DEV2) and installed it. The new device (DEV2) was on IP 192.168.0.XX. I struggled because I had bought two 14TB drives and the initial plan was to do a JBOD with them. During installation I learned about the VOLUME restriction of 16TB (previously I thought it was a drive restriction). Nevertheless I carried on and wanted to create two volumes (one for each 14TB drive). So I created the first volume on drive 1 (standard). During the setup I had to restart DEV2.
Somehow, unnoticed by me, during the reboot my browser thought it was a nice idea to switch to the old NAS326 (here DEV1), which had the IP 192.168.0.YY. But since the IP of the device was hidden in the browser (it just showed NAS326/...) I could not see that.
Then I looked at the volume and saw that it was "not properly" created (I know in retrospect this is where I should have stopped!!!). However I thought nothing of it (after all, the DEV2 disks are blank, so I just thought I'd start fresh and delete that volume). However I was secretly really deleting the DEV1 JBOD volume! At the next reboot the wrong NAS beeped, and I noticed what I had just done...
So the question is now: How can I repair/restore a deleted JBOD volume?
Hardware should be fine. No erroneous sectors or anything. I immediately stopped after the deletion and wrote this thread. PLEASE HELP!
Thanks in advance
All Replies
I don't think it's a very complex problem. A data volume on a ZyXEL NAS is always in a raid array, and a JBOD is basically a linear array. (On a ZyXEL. On other brands JBOD can mean single disk volumes.) If I had written the firmware, deleting a volume would mean zeroing out the raid headers. I didn't write it (duh!), but I have no reason to think ZyXEL has done it differently. So basically you have to restore the raid headers.
A problem might be that I don't know what a JBOD header looks like. If you haven't done anything with DEV2 yet, it might be an idea to re-create the JBOD array there and look at the headers: enable the ssh server, log in over ssh, and execute

su
mdadm --examine /dev/sd[ab]3

If the disks in DEV2 aren't available anymore, we can build an array without creating it, to see if the resulting volume is mountable. This way several settings can be tried without touching the disks. (Yet.)
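For comparison, the same examine command can be run on DEV1 first (assuming its data partitions are also sda3 and sdb3, as on DEV2) to confirm that deleting the volume really only removed the raid headers there:

su
mdadm --examine /dev/sd[ab]3   # "mdadm: No md superblock detected on /dev/sdX3" would confirm the headers are gone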
Thank you for the reply Mijzelf.
Before I proceed with your idea, allow me one question: I may have a configuration backup file (.rom). Could I restore the volume by simply loading this file into the NAS? Can this hurt anything?
Putting back a configuration backup won't hurt (except that you lose any configuration changes made since then), but I can't imagine that it will touch the disks, and so it won't solve your problem.
You are right, I just tried that on DEV2 and, as you said, it doesn't touch the volumes or Disk Groups.
Btw, I just noticed that there is a difference between Volume and Disk Group (as long as the "Disk Group" is not deleted, the "Create Volume" options are fixed to "Existing Disk Group"). However, I seem to have deleted the whole Disk Group on DEV1 (for clarity).
So, coming to your request for the SSH readout:
===================================
~ # mdadm --examine /dev/sd[ab]3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
           Name : NAS326:2  (local to host NAS326)
  Creation Time : Mon Oct 31 17:33:53 2022
     Raid Level : linear
   Raid Devices : 2
 Avail Dev Size : 27336763376 (13035.18 GiB 13996.42 GB)
  Used Dev Size : 0
    Data Offset : 16 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
    Update Time : Mon Oct 31 17:33:53 2022
       Checksum : 4b3cd45c - correct
         Events : 0
       Rounding : 64K
    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
           Name : NAS326:2  (local to host NAS326)
  Creation Time : Mon Oct 31 17:33:53 2022
     Raid Level : linear
   Raid Devices : 2
 Avail Dev Size : 27336763376 (13035.18 GiB 13996.42 GB)
  Used Dev Size : 0
    Data Offset : 16 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
    Update Time : Mon Oct 31 17:33:53 2022
       Checksum : 35d98b89 - correct
         Events : 0
       Rounding : 64K
    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing)
===================================
The UUID, timestamp, and checksum entries change with every "Disk Group" creation. Volume creation keeps all entries the same (but the data is not present).
OK. The interesting parts here are:
Version : 1.2
Feature Map : 0x0
Raid Level : linear
Data Offset : 16 sectors
Super Offset : 8 sectors
Rounding : 64K
sda3: Device Role : Active device 0
sdb3: Device Role : Active device 1
So you can create the array on DEV1 with
su
mdadm --create --assume-clean --level=linear --raid-devices=2 --data-offset=16 --metadata=1.2 /dev/md2 /dev/sda3 /dev/sdb3

(The line starting with mdadm is a single line.) After that, the command 'mdadm --examine /dev/sd[ab]3' should show similar output as on DEV2. Reboot the box, and with some luck your volume is back. But beware: as far as the NAS knows it's a new volume, so your shares aren't accessible yet. You have to enable them in the Shares menu.
/Edit: mdadm is a bit picky about the exact sequence of the arguments, and I am not sure this sequence is right. But it will tell you which argument is in the wrong place. You can safely shuffle the arguments, as long as /dev/md2 and the following devices stay at the end.
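Before rebooting, the re-created array can be sanity-checked first; a quick sketch, assuming the array was created as /dev/md2 as above:

cat /proc/mdstat               # md2 should be listed as an active linear array
mdadm --detail /dev/md2        # size and member devices should match the old volume
mdadm --examine /dev/sd[ab]3   # the fields listed above should match the DEV2 output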
Thanks for the instructions. As a first test I ran the whole process on DEV2 as a sandbox (no offense). I did the following:
1) Created a JBOD (max drive space, all default settings)
2) [pre]: got the "mdadm --examine /dev/sd[ab]3" info via SSH
3) Copied some test data onto the drive
4) Deleted the whole "Disk Group" in the web interface
5) Rebooted the NAS
6) Checked "mdadm --examine /dev/sd[ab]3" again just to be sure
7) Created a new array with mdadm (details below)
8) [post]: got the "mdadm --examine /dev/sd[ab]3" info via SSH
9) Compared [pre] and [post]
10) Checked the web interface of the NAS
11) Tried to find the test data via the web interface (unfortunately no data was shown)
==> Unfortunately it did not work. Any further instructions greatly appreciated.
Some noteworthy things:
To 6)
mdadm --examine /dev/sd[ab]3
mdadm: No md superblock detected on /dev/sda3.
mdadm: No md superblock detected on /dev/sdb3.
To 7)
Had to use a modified version:

mdadm --create --assume-clean --level=linear --raid-devices=2 --rounding=64K --metadata=1.2 /dev/md2 /dev/sda3 /dev/sdb3

because of A) and B):
A) The option "--data-offset" wasn't known to the mdadm version in use ("mdadm - v3.2.6 - 25th October 2012"); I found an internet forum stating that you need mdadm >= 3.3 for this option. But a comparison of "Super Offset" and "Data Offset" in [pre] and [post] showed that the default values are identical, so I just skipped this option.
B) Rounding turned out to be wrong at first: without the "--rounding=64K" option it was set to 0K, so I added that option. FYI, I started completely fresh (created the "Disk Group" and Volume in the web interface again) once I saw that.
To 9)
A diff of [pre] vs. [post] showed they are identical with the exception of:
- Array UUIDs
- Creation Times
- Device UUIDs
- Update Times
- Checksums
Are the checksums not a problem? Doesn't that mean that something is not fitting?
To 11)
No data in the "File Browser" of the web interface.
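For reference, a minimal sketch of how the [pre]/[post] snapshots of steps 2, 8 and 9 could be captured and compared (the /tmp file names are just examples, and a diff binary in the firmware's busybox is assumed):

mdadm --examine /dev/sd[ab]3 > /tmp/examine_pre.txt    # step 2: before deleting the Disk Group
# ... delete the Disk Group, reboot, re-create the array ...
mdadm --examine /dev/sd[ab]3 > /tmp/examine_post.txt   # step 8: after re-creating the array
diff /tmp/examine_pre.txt /tmp/examine_post.txt        # step 9: only UUIDs, times and checksums should differ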
Wow! Good catch about the rounding. I overlooked that (it's only used on a linear array, and I don't have much experience with that).
"Are the checksums not a problem? Doesn't that mean that something is not fitting?"
No. It's the checksum of the header itself, and as the UUIDs are different, the checksums are different.
"No data in the 'File Browser' of the web interface"
I don't see it in your listing, did you reboot after creating the array? The internal logical volumes aren't auto-detected, I think, nor are the internal filesystems mounted. In case you are interested, those logical volumes can be administrated with vgscan & friends, and mounting and checking the volume can be done with
mkdir /tmp/mountpoint
mount /dev/<device> /tmp/mountpoint
ls /tmp/mountpoint

where <device> is some vg_<something>, which is shown by vgscan, and/or by

cat /proc/partitions

On a system without a volume group, <device> is in this case md2.
BTW, I don't think DEV1 has a volume group inside its raid array, unless you specifically configured that. On DEV2 that is the default, because of the 16TiB+ size of the array.
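In case vgscan does find a volume group, a rough sketch of bringing it up and mounting it read-only could look like this (that vgchange is present in the firmware, and the exact logical-volume device path, are assumptions):

mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3     # assemble the array first, if it isn't up yet
vgscan                                            # look for volume group headers on all block devices
vgchange -ay                                      # activate any logical volumes that were found
cat /proc/partitions                              # activated logical volumes show up as new block devices
mkdir -p /tmp/mountpoint
mount -o ro /dev/<vg_something>/<lv_something> /tmp/mountpoint   # the node may also appear under /dev/mapper/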
Mijzelf said: "I don't see it in your listing, did you reboot after creating the array?"
Ok, we'll see once I try it.
Mijzelf said: "BTW, I don't think DEV1 has a volume group inside its raid array, unless you specifically configured that. On DEV2 that is the default, because of the 16TiB+ size of the array."
As for the other instructions: I'm not sure I understood correctly... Here's what I did:

~ # cat /proc/partitions
major minor  #blocks  name
   7        0      144384 loop0
  31        0        2048 mtdblock0
  31        1        2048 mtdblock1
  31        2       10240 mtdblock2
  31        3       15360 mtdblock3
  31        4      108544 mtdblock4
  31        5       15360 mtdblock5
  31        6      108544 mtdblock6
   8        0 13672382464 sda
   8        1     1998848 sda1
   8        2     1999872 sda2
   8        3 13668381696 sda3
   8       16 13672382464 sdb
   8       17     1998848 sdb1
   8       18     1999872 sdb2
   8       19 13668381696 sdb3
   9        0     1997760 md0
   9        1     1998784 md1
   9        2 27336763264 md2
~ # vgscan
  Reading all physical volumes. This may take a while...
~ #
~ # mkdir /tmp/mountpoint
~ # mount /dev/md2 /tmp/mountpoint
mount: /dev/md2 is write-protected, mounting read-only
mount: wrong fs type, bad option, bad superblock on /dev/md2,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Maybe the error is because vgscan was still running (I tried to ctrl+c it but I don't know if that worked). Sorry, my Linux knowledge is very limited. Or do you see something else?
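The mount error itself suggests a next step; a small sketch of checking the kernel log and the start of md2 (that hexdump is available in the firmware is an assumption):

dmesg | tail                                                      # the kernel usually logs why the mount failed
dd if=/dev/md2 bs=1024 skip=1 count=1 2>/dev/null | hexdump -C | head
# an ext2/3/4 filesystem sitting directly on md2 would show the magic bytes "53 ef" at offset 0x38 of this block;
# if the magic is missing, the filesystem probably does not start at the beginning of md2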
"Sorry my Linux knowledge is very limited."
In this case it's about block devices. A block device is a randomly accessible device where you can read or write one or more blocks at a time. 'cat /proc/partitions' shows all block devices known to the kernel. (The whole /proc/ directory is not a 'real' directory, but a direct peek into kernel structures.) In your /proc/partitions you see mtdblockN devices, which are flash partitions. Further you see sdX devices, which are SATA disks (or other devices which behave like SATA disks, such as USB sticks); sdXn is partition n on disk sdX. And device mdN is 'multi disk' device N, which in most cases is a raid array.
A block device can contain a filesystem, in which case it's mountable (provided you have the filesystem driver). Or it can contain a partition table, in which case it is not mountable, but it might have partitions. Or it has a raid header, in which case mdadm can create an mdn device from it, provided enough members are available. Or it can contain a volume group header, in which case vgscan can create one or more vg_<something> devices.
So in your case you should assemble the array (if it isn't done already) so /proc/partitions contains the right mdn device. Then execute vgscan, to examine all known block devices for a volume group header, and then examine /proc/partitions again to see if it added any vg_<something> devices, which might be mounted with 'mount /dev/vg_<something> /tmp/mountpoint'.
I can imagine this looks a bit confusing. But in Linux a block device is just a block device, and Linux doesn't make any assumptions. If you want to put a partition on a block device, a raid array on that partition, a volume group on that raid array, and a partition table on a logical volume in that volume group, you are free to do so. But you'll have to create the non-primary block devices in the right order.
Having said that, I would expect your call to vgscan to output the found volume groups (and their logical volumes), but it is silent. So either my expectations are wrong (perfectly possible, in Linux it's quite common that a command gives no output unless something is wrong) or the volume group doesn't exist anymore. That could be part of the volume deletion: not only zeroing out the raid headers, but also zeroing out the volume group headers. Unless deleting a volume takes hours, it cannot zero out the filesystem itself.
So the big question is, did DEV1 use a volume group? If you hesitate to create a raid array on that device, you could also build one. That means you don't use --create, but --build. In that case it assembles the raid array without writing the raid headers, and you can simply test if the assembled array is mountable. If it isn't, then either the data offset of the array is wrong, or the raid array doesn't contain a filesystem, but a (deleted?) volume group.
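A rough sketch of that read-only --build test, assuming the same sda3/sdb3 data partitions and the 64K rounding seen on DEV2:

mdadm --build /dev/md2 --level=linear --raid-devices=2 --rounding=64K /dev/sda3 /dev/sdb3
mkdir -p /tmp/mountpoint
mount -o ro /dev/md2 /tmp/mountpoint   # works only if a filesystem sits directly on the array at the right offset
mdadm --stop /dev/md2                  # tear the built array down again; nothing was written to the disks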