NAS326: Restore Deleted JBOD Volume

SpamMaster50000
Hi,
today I had a horrible accident.

I bought a second NAS326 (here DEV2) and installed it. The new device (DEV2) was on IP 192.168.0.XX. I struggled a bit because I had bought two 14TB drives and the initial plan was to put them in a JBOD. During installation I learned about the 16TB volume restriction (previously I thought it was a per-drive restriction). Nevertheless I carried on and decided to create two volumes instead (one for each 14TB drive). So I created the first volume on drive 1 (standard). During the setup I had to restart DEV2.

Somehow, unnoticed by me, during the reboot my browser thought it a nice idea to switch to the old NAS326 (here DEV1), which had the IP 192.168.0.YY. But since the IP of the device was hidden in the browser (it just showed NAS326/...) I could not see that.

Then I looked at the volume and saw that it was "not properly" created (in retrospect I know this is where I should have stopped!!!). However, I thought nothing of it (after all, the DEV2 disks were blank), so I figured I would just start fresh and delete that volume. In reality I was deleting the DEV1 JBOD volume! At the next reboot the wrong NAS beeped, and I noticed what I had just done...

So the question now is: how can I repair/restore a deleted JBOD volume?

The hardware should be fine, no erroneous sectors or the like. I stopped immediately after the deletion and wrote this thread. PLEASE HELP!

Thanks in advance


All Replies

  • Mijzelf
    I don't think it's a very complex problem. A data volume on a ZyXEL NAS is always in a raid array, and a JBOD is basically a linear array. (On a ZyXEL, that is. On other brands JBOD can mean single-disk volumes.) If I had written the firmware, deleting a volume would mean zeroing out the raid headers. I didn't write it (duh!), but I have no reason to think ZyXEL has done it differently. So basically you have to restore the raid headers. A problem is that I don't know what a JBOD header looks like. If you haven't done anything with DEV2 yet, it might be an idea to re-create the JBOD array there and look at the headers: enable the ssh server, log in over ssh, and execute
    su
    mdadm --examine /dev/sd[ab]3
    If the disks in DEV2 aren't available anymore, we can build an array without creating it, to see if the resulting volume is mountable. This way several settings can be tried without (yet) touching the disks.
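    To illustrate what "linear" means here, a minimal sketch with made-up sizes (the real members would be the two sda3/sdb3 partitions): a linear array simply concatenates its members, so a logical block lives on the first member until that runs out, then on the second.

```shell
# Toy model of linear (JBOD) addressing. dev0_size is hypothetical and
# counted in rounding-sized chunks, not the real partition size.
dev0_size=100
map_chunk() {
  # logical chunk number -> "member offset-within-member"
  if [ "$1" -lt "$dev0_size" ]; then
    echo "sda3 $1"
  else
    echo "sdb3 $(( $1 - dev0_size ))"
  fi
}
map_chunk 42    # falls on the first member
map_chunk 150   # falls 50 chunks into the second member
```

    Because the mapping is pure concatenation, restoring the headers (which record member order, rounding, and offsets) is enough to get the data back, as long as the member contents themselves were not touched.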

  • SpamMaster50000
    edited October 2022
    Thank you for the reply Mijzelf.

    Before I proceed with your idea, allow me one question: I may have a configuration backup file (.rom). Could I restore the volume by simply loading this file into the NAS? Can this hurt anything?


  • Mijzelf
    Putting back a configuration backup won't hurt (except that you lose any configuration changes made since then), but I can't imagine that it touches the disks, so it won't solve your problem.
  • SpamMaster50000
    You are right, I just tried that on DEV2 and, as you said, it doesn't touch the volumes or Disk Groups.

    Btw, I just noticed that there is a difference between a Volume and a Disk Group (as long as the "Disk Group" is not deleted, the "Create Volume" options are fixed to "Existing Disk Group"). However, I seem to have deleted the whole Disk Group on DEV1 (for clarity).

    So, coming to your request for the SSH readout:
    ===================================
    ~ # mdadm --examine /dev/sd[ab]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
               Name : NAS326:2  (local to host NAS326)
      Creation Time : Mon Oct 31 17:33:53 2022
         Raid Level : linear
       Raid Devices : 2

     Avail Dev Size : 27336763376 (13035.18 GiB 13996.42 GB)
      Used Dev Size : 0
        Data Offset : 16 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

        Update Time : Mon Oct 31 17:33:53 2022
           Checksum : 4b3cd45c - correct
             Events : 0

           Rounding : 64K

       Device Role : Active device 0
       Array State : AA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
               Name : NAS326:2  (local to host NAS326)
      Creation Time : Mon Oct 31 17:33:53 2022
         Raid Level : linear
       Raid Devices : 2

     Avail Dev Size : 27336763376 (13035.18 GiB 13996.42 GB)
      Used Dev Size : 0
        Data Offset : 16 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

        Update Time : Mon Oct 31 17:33:53 2022
           Checksum : 35d98b89 - correct
             Events : 0

           Rounding : 64K

       Device Role : Active device 1
       Array State : AA ('A' == active, '.' == missing)
    ===================================

    The UUID, creation time and checksum entries change with every "Disk Group" creation. Volume creation keeps all entries the same (but the data is not present).
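    As an aside, the handful of fields needed to re-create the array can be pulled out of a saved --examine dump mechanically. A sketch using a few sample lines copied from the output above (the sed patterns assume the spacing mdadm uses here):

```shell
# Extract the settings that matter for re-creating the array
# from saved 'mdadm --examine' output (sample lines from the post).
examine='     Raid Level : linear
   Raid Devices : 2
    Data Offset : 16 sectors
   Super Offset : 8 sectors
       Rounding : 64K'
level=$(printf '%s\n' "$examine"    | sed -n 's/.*Raid Level : //p')
ndev=$(printf '%s\n' "$examine"     | sed -n 's/.*Raid Devices : //p')
offset=$(printf '%s\n' "$examine"   | sed -n 's/.*Data Offset : \([0-9]*\).*/\1/p')
rounding=$(printf '%s\n' "$examine" | sed -n 's/.*Rounding : //p')
echo "level=$level devices=$ndev data-offset=$offset rounding=$rounding"
```

    Keeping such a dump from a healthy array around makes a later recovery much easier, since these are exactly the values a re-create has to reproduce.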

  • Mijzelf
    edited November 2022
    OK. The interesting parts here are:
    Version : 1.2
    Feature Map : 0x0
    Raid Level : linear
    Data Offset : 16 sectors
    Super Offset : 8 sectors
    Rounding : 64K
    sda3: Device Role : Active device 0
    sdb3: Device Role : Active device 1

    So you can create the array on DEV1 with
    su
    mdadm --create --assume-clean --level=linear --raid-devices=2 --data-offset=16 --metadata=1.2 /dev/md2 /dev/sda3 /dev/sdb3

    (The line starting with mdadm is a single line.) After that, the command 'mdadm --examine /dev/sd[ab]3' should show output similar to that on DEV2. Reboot the box, and with some luck your volume is back. But beware: as far as the NAS knows it's a new volume, so your shares aren't accessible yet. You have to enable them in the Shares menu.
    /Edit: mdadm is a bit picky about the exact sequence of the arguments, and I am not sure this sequence is right. But it will tell you which argument is in the wrong place. You can safely shuffle the arguments, as long as /dev/md2 and everything after it stay at the end.

  • SpamMaster50000
    Thanks for the instructions. I used DEV2 as a sandbox to test everything (no offense).

    I did the following

    1) Create JBOD
    2) [pre]: Examine with mdadm via SSH "mdadm --examine /dev/sd[ab]3" 
    3) Copy test data on NAS
    4) Deleted "Disk Group" in Web Interface
    5) Rebooted NAS
    6) checked "mdadm --examine /dev/sd[ab]3" again (both partitions now reported "No md superblock detected")
    7) Created new volume with mdadm (see details below)
    8) [post]: Examine with mdadm via SSH "mdadm --examine /dev/sd[ab]3" 
    9) diff compare on [pre] and [post]
    10) Checked Web Interface of NAS
    11) Tried to find data via web interface

    ==> Unfortunately it did not work. Any further instructions greatly appreciated.


    Some more details on the steps

    To 7)
    Had to use a modified version 

    mdadm --create --assume-clean --level=linear --raid-devices=2 --rounding=64K --metadata=1.2 /dev/md2 /dev/sda3 /dev/sdb3

    because of A) and B):

    A) The option "--data-offset" was not known to the mdadm version in use (mdadm - v3.2.6 - 25th October 2012). I found an internet forum entry stating that you need mdadm >= 3.3 for this option. By trial I found I don't need it, because the default data offset turned out to be correct. ==> Just skipped this option.

    B) On my first try I found that the rounding was wrong: by default it came out as 0K. I found the option "--rounding=64K" and added it.

    To 9)
    [pre] and [post] are identical with the following exceptions
    Array UUIDs
    Creation Times
    Device UUIDs
    Update Times
    Checksums

    Are the checksums not a problem? Doesn't that mean that something doesn't match?

    To 11)
    No data in the "File Browser" of the web interface.
  • Mijzelf
    Wow! Good catch on the rounding. I overlooked that (it's only used on a linear array, and I don't have much experience with those).
    Are the checksums not a problem? Doesn't that mean that somethings is not fitting?
    No. It's the checksum of the header itself, and as the UUIDs are different, the checksums are different.
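    A quick way to convince yourself: checksum two header-like strings that differ only in the UUID; the checksums differ even though each is internally "correct". (cksum is just a stand-in for illustration here; mdadm computes its own superblock checksum, and the UUID values below are made up.)

```shell
# Two hypothetical header fragments differing only in the UUID field.
a=$(printf 'Array UUID : 11111111' | cksum | cut -d' ' -f1)
b=$(printf 'Array UUID : 22222222' | cksum | cut -d' ' -f1)
[ "$a" != "$b" ] && echo "different UUID -> different checksum"
```

    So a changed checksum after re-creating the array is expected, not a sign of corruption.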
    No data in the "File Browser" of the web interface

    I don't see it in your listing: did you reboot after creating the array? The internal logical volumes aren't auto-detected, I think, nor are the internal filesystems mounted. In case you are interested, those logical volumes can be administered with vgscan & friends, and mounting and checking the volume can be done with

    mkdir /tmp/mountpoint
    mount /dev/<device> /tmp/mountpoint
    ls /tmp/mountpoint

    where <device> is some vg_<something> , which is shown by vgscan, and/or by
    cat /proc/partitions
    On a system without a volume group, the device in this case is md2.
    BTW, I don't think DEV1 has a volume group inside its raid array, unless you specifically configured that. On DEV2 that is the default, because of the 16TiB+ size of the array.
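    For completeness, a sketch of how to spot the relevant devices in such a listing; the rows below are hypothetical sample data shaped like a /proc/partitions dump, with awk filtering for md and vg_ names:

```shell
# Sample rows shaped like /proc/partitions (major minor #blocks name).
snapshot='   8        3 13668381696 sda3
   8       19 13668381696 sdb3
   9        2 27336763264 md2'
# Print any multi-disk (md) or volume-group (vg_) device names.
printf '%s\n' "$snapshot" | awk '$4 ~ /^(md|vg_)/ {print $4}'
```

    On a live box you would pipe 'cat /proc/partitions' into the same awk filter; a vg_ entry appearing after vgscan would be the device to mount.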
  • SpamMaster50000
    Mijzelf said:
    I don't see it in your listing, did you reboot after creating the array? 
    Yes, I think I did, because that would be the real use case.


    Mijzelf said:
    BTW, I don't think DEV1 has a volume group inside its raid array, unless you specifically configured that. On DEV2 that is the default, because of the 16TiB+ size of the array.
    Ok, we'll see once i try it.

    As for the other instructions: I'm not sure I understood them correctly... Here's what I did

    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     144384 loop0
      31        0       2048 mtdblock0
      31        1       2048 mtdblock1
      31        2      10240 mtdblock2
      31        3      15360 mtdblock3
      31        4     108544 mtdblock4
      31        5      15360 mtdblock5
      31        6     108544 mtdblock6
       8        0 13672382464 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 13668381696 sda3
       8       16 13672382464 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 13668381696 sdb3
       9        0    1997760 md0
       9        1    1998784 md1
       9        2 27336763264 md2
    ~ # vgscan
      Reading all physical volumes.  This may take a while...
    ~ #
    ~ # mkdir /tmp/mountpoint
    ~ # mount /dev/md2 /tmp/mountpoint
    mount: /dev/md2 is write-protected, mounting read-only
    mount: wrong fs type, bad option, bad superblock on /dev/md2,
           missing codepage or helper program, or other error

           In some cases useful info is found in syslog - try
           dmesg | tail or so.




    Maybe the error is because vgscan was still running (I tried ctrl+c but I don't know if it worked). Sorry, my Linux knowledge is very limited. Or do you see something else?


  • Mijzelf
    Sorry my Linux knowledge is very limited.
    In this case it's about block devices. A block device is a randomly accessible device from which you can read or write one or more blocks at a time. 'cat /proc/partitions' shows all block devices known to the kernel. (The whole /proc directory is not a 'real' directory, but a direct peek into kernel structures.) In your /proc/partitions you see mtdblockN devices, which are flash partitions. Further you see sdX, which are sata disks (or other devices which behave like sata disks, such as usb sticks). sdXn is partition n on disk sdX. And device mdN is 'multi disk' device N, which in most cases is a raid array.

    A block device can contain a filesystem, in which case it's mountable (provided you have the filesystem driver). Or it can contain a partition table, in which case it is not mountable, but it might have partitions. Or it has a raid header, in which case mdadm can create an mdn device from it, provided enough members are available. Or it can contain a volume group header, in which case vgscan can create one or more vg_<something> devices.

    So in your case you should assemble the array (if it isn't done already) so /proc/partitions contains the right mdn device. Then execute vgscan, to examine all known block devices for a volume group header, and then examine /proc/partitions again to see if it added any vg_<something> devices, which might be mounted with 'mount /dev/vg_<something> /tmp/mountpoint'.

    I can imagine this looks a bit confusing. But in Linux a blockdevice is just a blockdevice, and Linux doesn't make any assumptions. If you want to put a partition on a blockdevice, and a raid array on that partition, and a volume group on that raid array, and a partition table on a logical volume in that volume group, you are free to do so. But you'll have to create the non-primary block devices in the right order.

    Having said that, I would expect your call to vgscan to output the found volume groups (and their logical volumes), but it is silent. So either my expectations are wrong (perfectly possible; in Linux it's quite common that a command gives no output unless something is wrong), or the volume group doesn't exist anymore. That could be part of the volume deletion: not only zeroing out the raid headers, but also the volume group headers. Unless deleting a volume takes hours, it cannot zero out the filesystem itself.
    So the big question is: did DEV1 use a volume group? If you hesitate to create a raid array on that device, you could also build one, which means you use --build instead of --create. In that case it assembles the raid array without writing the raid headers, and you can simply test whether the assembled array is mountable. If it isn't, then either the data offset of the array is wrong, or the raid array doesn't contain a filesystem but a (deleted?) volume group.
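    The --build route could look roughly like this. A dry-run sketch that only prints the commands instead of executing them (remove the echo wrapper and run as root to do it for real; /dev/md9 and the mount point are arbitrary choices, and the options may need tuning as discussed above — in particular --build places data at offset 0, while a created 1.2-superblock array kept it 16 sectors in):

```shell
# Dry run: 'run' prints each command instead of executing it.
run() { echo "+ $*"; }
run mdadm --build /dev/md9 --level=linear --raid-devices=2 --rounding=64K /dev/sda3 /dev/sdb3
run mkdir -p /tmp/mountpoint
run mount -o ro /dev/md9 /tmp/mountpoint   # read-only, so nothing is written
run ls /tmp/mountpoint
```

    If the read-only mount succeeds and the files are visible, the same parameters can then be used with --create --assume-clean to make the array permanent.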
