NAS326: Restore Deleted JBOD Volume

All Replies

  • SpamMaster50000 (Posts: 15, Freshman Member)
    Thank you so much for your patience!

    I am a little confused about the terminology "Disk Group". From vgscan I learned a bit about LVM, which operates with Physical Volumes (PV), Volume Groups (VG) and Logical Volumes (LV)... I think I understand that concept (sorry, so far I was just a user and never bothered to dig into this).

    There are no "Disk Groups"... Is Zyxel using the name "Disk Group" for a "Volume Group"?

    I understand that your suggestion has the following steps:
    1) "So in your case you should assemble the array": How exactly?
    2) "Then execute vgscan, to examine all known block devices for a volume group header, and then examine /proc/partitions again to see if it added any vg_<something> devices, which might be mounted with 'mount /dev/vg_<something> /tmp/mountpoint'.": I thought "vgscan" and "cat /proc/partitions" are commands to examine. Why does vgscan add devices? What if vgscan does not find a volume group?


    "I would expect your call to vgscan to output the found volume groups": I don't think it's silent. It just didn't find anything.


    "volume group doesn't exist anymore. The could be part of the volume deletion, not only zero out the raid headers, but also zero out the volume group headers. Unless deleting a volume takes hours, it cannot zero out the filesystem itself.": Let's assume this is the case. What to do now?

    "So the big question is, did DEV1 use a volume group?": I'm 60% sure it had one (i know that is not much). You wrote earlier 
    "BTW, I don't think DEV1 has a volume group inside it's raid array, unless you specifically configured that. On DEV2 that is the default, because of the 16GiB+ size of the array.": You mean 16TB+, right? So if i use DEV2 and set um a 12TB JBOD you think i could find out?

    "If you hesitate to create a raid array on that device, you could also build one. Which means you don't use --create, but --build. In that case it assembles the raid array, without writing the raid headers.": Sounds reasonable. Still don't understand where to start and all the possibilities (sorry, sticking to the old dont do anything until sure recovery rule).

    "And you can simply test if the assembled array is mountable.": ok


    "If it isn't, than either the data offset of the array is wrong, or the raid array doesn't contain a filesystem, but a (deleted?) volume group.": In which case i would do what? 

    Do you think I'd be better off removing the drives from the NAS, hooking them up to a PC and running some data recovery software?

    In the meantime I'll run through the sandbox process on DEV2 again and make sure I get the vgscan and cat /proc/partitions info of the healthy system. I would really like to have a successful dry run before I go to DEV1, even if it's extra work for me. Hope you understand.
  • SpamMaster50000 (Posts: 15, Freshman Member)
    New Sandbox on DEV2 with ~13 TB JBOD Volume

    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     144384 loop0
      31        0       2048 mtdblock0
      31        1       2048 mtdblock1
      31        2      10240 mtdblock2
      31        3      15360 mtdblock3
      31        4     108544 mtdblock4
      31        5      15360 mtdblock5
      31        6     108544 mtdblock6
       8        0 13672382464 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 13668381696 sda3
       8       16 13672382464 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 13668381696 sdb3
       9        0    1997760 md0
       9        1    1998784 md1
       9        2 27336763264 md2
     253        0     102400 dm-0
     253        1 14033092608 dm-1

    ~ # vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "vg_354b8b32" using metadata type lvm2


    FYI: The Volume and Disk Group are still intact at this point.
  • Mijzelf (Posts: 2,758, Guru Member)
    There are no "Disk Groups"... Is Zyxel using the name "Disk Group" for a "Volume Group"?

    Sort of, I think.

    I thought "vgscan" and "cat /proc/partitions" are commands to examine. Why does vgscan add devices? What if vgscan does not find a volume group?
    vgscan not only searches for volume groups and the logical volumes therein, it also adds them to the kernel's table of block devices.
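    A minimal sketch of that, assuming the stock lvm2 tools are present (on some firmwares the logical volumes may additionally need to be activated before their device nodes appear):

    ~ # vgscan                  # scan all block devices for volume group headers
    ~ # vgchange -ay            # activate the found logical volumes, creating device nodes
    ~ # cat /proc/partitions    # the activated volumes should now show up as dm-* entries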
    Do you think I'd be better off removing the drives from the NAS, hooking them up to a PC and running some data recovery software?
    Define 'better'. There are basically two kinds of data recovery software. One kind doesn't use (or need) the filesystem, but instead searches the raw disk for recognizable files, like PhotoRec does. The other kind tries to repair the filesystem, and so needs to 'know' that filesystem. The problem here is that the filesystem is possibly inside a volume group, and that is inside a raid container. To be able to access the filesystem the software needs to 'know' at least about the raid container. (The volume group just adds another offset.)
    On a (non-Linux) PC the first kind can work. But its downsides are that it can't find all types of files (only files with a recognizable header), it doesn't recover metadata (filename, path, timestamps), it can't handle fragmented files, and it also digs up old deleted files.
    For the second kind I think you'll at least have to tell it somehow about the raid array. (On a Linux box you can do that by building it).
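    For the first kind, an invocation on a Linux box could look roughly like this (a sketch; the destination directory is a placeholder, and PhotoRec continues interactively from there):

    ~ # photorec /log /d /mnt/usb/recovered /dev/md2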
    New Sandbox on DEV2 with ~13 TB JBOD Volume
       9        2 27336763264 md2
     253        0     102400 dm-0
     253        1 14033092608 dm-1
    Sandbox doesn't work in this case. md2 is the raid volume spanning both disks, and is around 27TB. Then you specified a volume of 13TB. The only way the firmware has to fit that inside md2 is by using a volume group. I *think* dm-0 is the volume group header, and dm-1 is the 13TB volume.
    Found volume group "vg_354b8b32" using metadata type lvm2
    So it's definitely not silent, and so deleting the volume (using the firmware) also deletes the volume group. (And no, I don't know why the devices are dm-0 and dm-1, and not vg_<something>.)
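    If you want to map those dm-* numbers back to LVM names, something like this should work (a sketch, assuming dmsetup and the lvm tools are on the box):

    ~ # dmsetup ls            # device-mapper devices with their LVM-derived names
    ~ # ls -l /dev/mapper/    # symlinks like vg_354b8b32-<lv> -> ../dm-1
    ~ # lvdisplay             # details of the logical volumes in the found VG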

    I'll try to show the topology with some 'graphs': Your two disks:
    disk1:ABBBCCCDDDDDD
    disk2:EFFFGGGHHHHHH
    A is the partition table on disk 1, E on disk 2. B is the first partition on disk 1, C the second, both used by firmware. D is the data partition, and H is the data partition on disk 2.
    The two data partitions:
    D:RTTTTT
    H:SUUUU
    R and S are the raid headers. When assembled the array shows as one block:
    md2:TTTTTUUUUU
    (BTW, in case of raid0 that would have been TUTUTUTUTU.)
    When using a volume group that is divided in
    VG:XVVVVVVVWW
    where X is the header, V is volume 1, and W is volume two.
    Without volume group the array 'simply' just contains a filesystem. With volume groups the blocks V and W contain the filesystems. (And on DEV1 W is non-existent.)
    So to put it all together: in case of a volume group, your filesystem starts on the 3rd 'block' of the 3rd partition of disk 1 (1st block is the raid header, 2nd block is the VG header). It spans the rest of the partition, and continues on the 2nd block of the 3rd partition of disk 2. (This partition doesn't contain a VG header, because the VG logically lives on the raid array, not on the partitions.)
    I call it all 'blocks', but of course the size of the raid headers doesn't have to be the same as the size of the VG header. And to complicate matters, a rounding of 64KiB is used, which means that if TTTTT is not a multiple of 64KiB, the last fragment is not used, creating a 'hole' in the TTTTTUUUUU array.
    That is complex matter for rescue software when the raid and VG headers are zeroed out.
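    On the healthy DEV2 the first of those offsets, the raid data offset, can be read straight from a member's superblock; a sketch (the value shown is made up):

    ~ # mdadm --examine /dev/sda3 | grep -i offset
        Data Offset : 262144 sectors     (hypothetical output; sectors are 512 bytes)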

    When you build the raid array on DEV1 and the array is not mountable, then we have to assume that a VG is used. On your DEV2 you can see the VG header is 102400KiB, which is 100MiB. Using a loop device it is possible to define a new block device at an offset of 100MiB into the raid array, and then check whether that is mountable. If it is, you can at least use it to copy away your data. I suppose it's also possible to re-create the VG without destroying the filesystem inside, but I have no experience with that.

    The command to create that offset block device is:
    losetup /dev/loop1 -o 104857600 /dev/md2
    assuming /dev/md2 is the raid array. After this /dev/loop1 might be mountable. (When /dev/loop1 is in use (check /proc/partitions) just use /dev/loop2, or ..3)

    When the offset is not 100MiB, some extra software is needed to find the filesystem at an offset within md2. Testdisk should do that, and I think a build for the Marvell 88F628x might run on your box.
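    Testdisk can also just list what it finds without the interactive menus; a sketch, assuming a static ARM build of testdisk sitting in the current directory:

    ~ # ./testdisk /list /dev/md2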

  • SpamMaster50000 (Posts: 15, Freshman Member)
    What do you think of getting an image of the drive's initial data (maybe a gigabyte or so)?

    I'm experimenting with:

    dd if=/dev/sda bs=1M count=1000 | ssh admin@192.168.0.XX dd of=cloneofsda.img

    Another benefit would be a file that I can have a look at. Of course the true purpose is a fallback position in case headers get overwritten.

    Didn't have much time today. I'll try your other suggestions tomorrow.
  • Mijzelf (Posts: 2,758, Guru Member)
    what do you think of getting an image of the drives initial data (maybe a gigabyte or so)?
    It won't hurt. And in theory you can use the image to study the headers in a decent hex editor. Don't know if it's enough to find the start of the filesystem. Else you can analyse the dump with Testdisk, to see if it can find the offset.
    dd if=/dev/sda bs=1M count=1000 | ssh admin@192.168.0.XX dd of=cloneofsda.img

    As input file you'd better use /dev/sda3. The first 4 gigabytes of /dev/sda are used for two firmware partitions, which aren't worth backing up.
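    So the transfer could look like this (a sketch; the host and filenames are placeholders, and gzip is just an optional way to cut transfer time):

    ~ # dd if=/dev/sda3 bs=1M count=1000 | gzip | ssh admin@192.168.0.XX "cat > cloneofsda3.img.gz"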

  • SpamMaster50000 (Posts: 15, Freshman Member)
    Hi, this little "sidequest" was supposed to go fast. Instead I'm having huge problems getting a file...
    I've tried three methods now...
    1) [remote] with putty on windows
    2) [remote] on VM Ubuntu
    3) [local] on VM Ubuntu

    where [remote] means I run dd from the remote station, i.e. the NAS,
    and [local] means I run dd from the local station, i.e. the Ubuntu VM.

    Can you help? It's probably no effort for you, but I can't manage to get a file.

    (Your comment on sda3 is noted. I'll switch the source once it works.)

    So to 1)
    dd if=/dev/sda bs=1M count=100 | ssh admin@192.168.0.XX "dd bs=1M of=C:\tmp\SECsda.img"

    admin@192.168.0.XX's password:
    100+0 records in
    100+0 records out
    104857600 bytes (100.0MB) copied, 9.689058 seconds, 10.3MB/s
    0+6299 records in
    0+6299 records out
    104857600 bytes (100.0MB) copied, 6.980433 seconds, 14.3MB/s

    It seems to download the file, but I don't have it on the drive (I also searched for it).

    To 2)
    dd if=/dev/sda bs=1M count=100 | ssh admin@192.168.0.XX "dd bs=1M of=/home/admin/SECsda.img"

    gives me
    dd: can't open '/home/admin/SECsda.img': No such file or directory

    And yes, I have created a user named admin and I am logged in as admin.
    Other destinations lead to the same result: no file.

    To 3)
    ssh -oHostKeyAlgorithms=+ssh-dss admin@192.168.0.XX "dd if=/dev/sda bs=1M count=100" | dd of=image.gz

    This in principle writes a file, but dd on the NAS can't open the disk, and inserting sudo did not work:
    dd: can't open '/dev/sda': Permission denied
    0+0 records in
    0+0 records out
    0 bytes copied, 2,99872 s, 0,0 kB/s

    Do you know what I'm doing wrong?
  • Mijzelf (Posts: 2,758, Guru Member)
    1) I have no experience with an ssh shell on Windows. But the of= path looks strange to me. Normally (on Linux) a backslash is an escape; 'touch C:\tmp\SECsda.img' generates a file C:tmpSECsda.img in the current directory. Don't know what it does here. You can try forward slashes instead. And is the shell plain Windows, or is it WSL? In the latter case, is C: available?
    2) There is a user admin, but is there also a directory /home/admin/? And is it writable for admin?
    3) admin doesn't have the right to read /dev/sda, so you should log in as root. Sudo is not available on the box.

    On 1) and 2): Debugging is easier by testing on the target locally. So for 1), log in over ssh on Windows and execute 'dd if=<somefile> | dd of=C:\tmp\SECsda.img', and then 'ls C:\tmp\SECsda.img' or something like that.
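    Putting that together, method 3 run as root could look like this (a sketch, keeping your host-key workaround; the output filename is a placeholder):

    $ ssh -oHostKeyAlgorithms=+ssh-dss root@192.168.0.XX "dd if=/dev/sda3 bs=1M count=100" | dd of=SECsda3.img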

  • SpamMaster50000 (Posts: 15, Freshman Member)

    Ok, it's been a while, but I managed to get some things done and wanted to sync the next steps with you.

    First: I managed to get those clones using ssh. Tried it with DEV2 and saw what you described (headers were deleted when I deleted the volume group in the NAS GUI). Then I made copies (5GB) of sda, sda3, sdb and sdb3 of my DEV1 (the real problematic unit). Looked at that data for a while. As you described, there is almost 4 gig of reserved space before my data starts.

    Anyway, I feel confident enough to go on now. As the discussions we had took so many turns and twists, I just wanted to list what I think I should do based on your guidance. Please confirm the course of action.

    On DEV1:

    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     144384 loop0
      31        0       2048 mtdblock0
      31        1       2048 mtdblock1
      31        2      10240 mtdblock2
      31        3      15360 mtdblock3
      31        4     108544 mtdblock4
      31        5      15360 mtdblock5
      31        6     108544 mtdblock6
       8        0 7814026584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 7810026496 sda3
       8       16 7814026584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 7810026496 sdb3
       9        0    1997760 md0
       9        1    1998784 md1

    So everything looks like a straightforward build of sda3 and sdb3.

    1)
    mdadm --examine /dev/sd[ab]3
    % pre-build, just to have some info

    2)
    mdadm --build --assume-clean --level=linear --raid-devices=2 --rounding=64K --metadata=1.2 /dev/md2 /dev/sda3 /dev/sdb3
    % builds the JBOD as md2; nothing is written to the drives, right?

    3)
    mdadm --examine /dev/sd[ab]3
    % post-build; should now show md2

    4)
    reboot? (Initially you proposed "mdadm --create" and a reboot.) I think in this case I should not reboot, right?

    5)
    vgscan
    and/or
    cat /proc/partitions
    % should show the new volume group, right?
    % What if it does not show anything (as happened on DEV2)?

    6)
    mkdir /tmp/mountpoint
    mount /dev/<device> /tmp/mountpoint
    ls /tmp/mountpoint
    where <device> is some vg_<something>
    % is that correct?

    7)
    How can I see that the data should be there now?
    Can I access the data via ssh?


  • Mijzelf (Posts: 2,758, Guru Member)
    2) Right. mdadm --build is not supposed to write the headers.
    3) Wrong. 'mdadm --examine /dev/sd[ab]3' shows the on-disk headers, which are not written. Instead you could 'mdadm --examine /dev/md2' or 'mdadm --detail /dev/md2' to examine the array itself.
    4) Do not reboot. The headers are not written, and a reboot will just reset the situation.
    5) If there is a volume group on /dev/md2, it should show up now. If it doesn't, just try to mount /dev/md2:
    mkdir /tmp/mountpoint
    mount /dev/md2 /tmp/mountpoint
    If that succeeds, you can see the data in /tmp/mountpoint:
    ls -l /tmp/mountpoint/
    You can also use WinSCP (which is closely related to ssh) to walk through the tree and copy your data, if desired.
    If the mount fails, try to create a loop device on /dev/md2 as I wrote earlier:
    losetup /dev/loop1 -o 104857600 /dev/md2
    and try to mount /dev/loop1. If that succeeds, there was a logical volume at that offset. It should certainly be possible to re-create that without touching the filesystem, but that is beyond my experience. If that also fails, you can use testdisk in an ssh shell to search for the filesystem on /dev/md2. I can't tell you exactly how, as testdisk is an interactive program with a 'gui'. But you should tell it to search for partitions or filesystems on /dev/md2, and hopefully it will find an ext4 filesystem at some offset. Don't let testdisk write a partition table; just take the offset it reports and use it in losetup. Testdisk gives the offset in sectors of 512 bytes, while losetup wants the offset in bytes.
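    A sketch of that conversion, with a made-up sector number standing in for whatever testdisk reports:

    ~ # losetup /dev/loop2 -o $(( 204800 * 512 )) /dev/md2    # offset in bytes = sectors * 512
    ~ # mount /dev/loop2 /tmp/mountpoint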


  • SpamMaster50000 (Posts: 15, Freshman Member)
    Ok

    2)
    ~ # mdadm --build --assume-clean --level=linear --raid-devices=2 --rounding=64K --metadata=1.2 /dev/md2 /dev/sda3 /dev/sdb3
    mdadm: option --metadata not valid in build mode

    so I used

    ~ # mdadm --build --assume-clean --level=linear --raid-devices=2 --rounding=64K /dev/md2 /dev/sda3 /dev/sdb3
    mdadm: array /dev/md2 built and started.


    3)
    ~ # mdadm --examine /dev/md2
    mdadm: No md superblock detected on /dev/md2.

    ~ # mdadm --detail /dev/md2
    /dev/md2:
            Version :
      Creation Time : Sun Nov 13 15:47:21 2022
         Raid Level : linear
         Array Size : 15620052992 (14896.44 GiB 15994.93 GB)
       Raid Devices : 2
      Total Devices : 2

              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

           Rounding : 64K

        Number   Major   Minor   RaidDevice State
           0       8        3        0      active sync   /dev/sda3
           1       8       19        1      active sync   /dev/sdb3

    4) NO REBOOT!

    5)
    ~ # vgscan
      Reading all physical volumes.  This may take a while...

    % nothing showed... so I continued trying to mount

    ~ # mkdir /tmp/mountpoint
    ~ # mount /dev/md2 /tmp/mountpoint
    mount: /dev/md2 is write-protected, mounting read-only
    mount: wrong fs type, bad option, bad superblock on /dev/md2,
           missing codepage or helper program, or other error

           In some cases useful info is found in syslog - try
           dmesg | tail or so.

    No data visible (by SCP or ls -l /tmp/mountpoint/).


    Should I reboot here and start fresh?
    Here is what I tried:

    ~ # losetup /dev/loop1 -o 104857600 /dev/md2
    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     144384 loop0
       7        1 15619950592 loop1
      31        0       2048 mtdblock0
      31        1       2048 mtdblock1
      31        2      10240 mtdblock2
      31        3      15360 mtdblock3
      31        4     108544 mtdblock4
      31        5      15360 mtdblock5
      31        6     108544 mtdblock6
       8        0 7814026584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 7810026496 sda3
       8       16 7814026584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 7810026496 sdb3
       9        0    1997760 md0
       9        1    1998784 md1
       9        2 15620052992 md2
    ~ # mount /dev/loop1 /tmp/mountpoint
    mount: /dev/loop1 is write-protected, mounting read-only
    mount: wrong fs type, bad option, bad superblock on /dev/loop1,
           missing codepage or helper program, or other error

           In some cases useful info is found in syslog - try
           dmesg | tail or so.


    ==> Still no data in SCP
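
    Before trying other offsets (e.g. one found by testdisk) I understand I should detach the loop device first, something like:

    ~ # losetup -d /dev/loop1    # detach the 100MiB-offset loop device
    ~ # dmesg | tail             # check why the mount failed, as the error message suggests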
