NAS540: Volume down, repairing failed, how to restore data?


All Replies

  • basetron
    basetron Posts: 13  Freshman Member
    BTW is it possible to stop the buzzer from the command line?
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    Yes.
    buzzerc -s && mv /sbin/buzzerc /sbin/buzzerc.old
    Will stop the buzzer, and remove the possibility for the firmware to start it again. Till the next reboot.
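    (To get the buzzer back before a reboot, reversing the rename should be enough, assuming nothing has replaced the file in the meantime:)
    mv /sbin/buzzerc.old /sbin/buzzerc    # reverses the rename from the command above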
  • basetron
    basetron Posts: 13  Freshman Member
    Hi,
    neither lvscan nor vgscan finds any volume groups:

    lvscan 
    ~ # lvscan -a -b -v
        Using logical volume(s) on command line.
        Finding all volume groups.
        No volume groups found.

    vgscan

    /dev # vgscan --mknodes -v
        Wiping cache of LVM-capable devices
        Wiping internal VG cache
      Reading all physical volumes.  This may take a while...
        Using volume group(s) on command line.
        Finding all volume groups.
        No volume groups found.
        Creating directory "/dev/mapper"
        Creating device /dev/mapper/control (10, 236)
        Using logical volume(s) on command line.
        Finding all volume groups.
        No volume groups found.

    I got a bit lost, so I'm pasting the lvmdiskscan output. sde is an additional external USB drive - I intended to robocopy my files to that drive, but I can't access them:

    ~ # lvmdiskscan
      /dev/loop0 [     144.00 MiB]
      /dev/sda   [       1.82 TiB]
      /dev/md0   [       1.91 GiB]
      /dev/md1   [       1.91 GiB]
      /dev/md2   [       1.82 TiB]
      /dev/md3   [       1.82 TiB]
      /dev/sde1  [     128.00 MiB]
      /dev/sde2  [       3.64 TiB]
      1 disk
      7 partitions
      0 LVM physical volume whole disks
      0 LVM physical volumes


    Is there an easy way to copy my files from these drives?
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    Is there an easy way to copy my files from these drives?

    When the filesystem can't be mounted, and it's not even clear where the filesystem is, the only way I can think of is low-level recovery. Depending on the nature of the data, a tool like PhotoRec can recover a lot, and it's not hard to use.

    The problem is that without the filesystem's help only the file contents can be restored, not the metadata (filename, timestamp, path name), as these are stored in the filesystem. So you end up with a (big?) bunch of files with random names and, fortunately, descriptive extensions. (Although I wouldn't be surprised if a docx document is restored as a zip, as it actually is a zip file.)
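
    For illustration only: run from a Linux PC, a PhotoRec scan of the raw array device might look like the sketch below. The device name and output directory are assumptions, not taken from this thread, and PhotoRec still asks interactively which partition and file types to scan.

    # sketch only: /dev/md2 and /mnt/usb/recovered are assumed names
    photorec /log /d /mnt/usb/recovered /dev/md2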

  • basetron
    basetron Posts: 13  Freshman Member
    Hi, I finally got my data back, but the recovery process was extremely long and difficult, and it only works if the actions are taken in the right order (a rough command sketch follows the list below):

    [before that I have taken all of the above-mentioned steps but I couldn't get it working]

    equipment:
    • Zyxel NAS540 with 4 drives inside, 2 TB (WD Green) each; the drives were grouped into 2 volumes of 2 drives each in RAID10 (each drive in a volume is mirrored on the other one)
    • 4 TB USB drive (Seagate)
    1. First I ran fsck on each of the drives
    2. Then I mounted the 1st drive from the 1st volume of the RAID array
    3. I mounted the USB drive and created an ext4 partition on it
    4. I rsynced these 2 drives; the rsync operation took me 4 days and nights (which is extremely strange and long)
    5. I unmounted the 1st drive from the 1st volume
    6. I mounted the 1st drive from the 2nd volume of the RAID array
    7. I rsynced this drive with the USB drive, and this time the rsync operation took 3 days and nights (which is also too long)
    8. I unmounted the 1st drive from the 2nd volume
    9. I formatted the former NAS drives via SSH and reinitialised the volumes in the web interface
    10. I'm currently running rsync to synchronise the data on the NAS with the data on the USB drive.
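
    For anyone following along, here is a rough sketch of what steps 1-7 look like on the command line; every device name, mount point and path is an assumption for illustration, not the exact one used:

    # sketch only: /dev/sdX3, /dev/sdY2 and the mount points are assumed names
    e2fsck -f /dev/sdX3                     # 1. check the data partition of each drive
    mkdir -p /mnt/vol1 /mnt/usb
    mount -o ro /dev/sdX3 /mnt/vol1         # 2. mount one member of the 1st volume read-only
    mkfs.ext4 /dev/sdY2                     # 3. format the USB drive partition as ext4
    mount /dev/sdY2 /mnt/usb
    rsync -a /mnt/vol1/ /mnt/usb/vol1/      # 4. copy volume 1 to the USB drive
    umount /mnt/vol1                        # 5. detach the member of the 1st volume
    # 6-7. repeat the mount and rsync with a member of the 2nd volume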

    This ends my problems. Thank you for your time.  
  • ksr7
    ksr7 Posts: 15  Freshman Member
    edited October 2019
    Hello there.

    I'm facing this issue.

    I got a Zyxel Nas 542 with 4x2TB hard drives in raid5 config, full working.

    I just wanted to replace one drive with a WD RED, so I got one from Amazon, swapped it in, and the NAS told me a new drive was found and to start re-syncing the array... aaaand it's gone.

    The RAID was not working, so I followed this guide a bit, and it seems the array was up again with only 3 disks, but every attempt to re-sync it was a failure. BTW, I was able to see the files from the built-in file browser, but I was unable to copy them or reach them from any machine (neither Windows nor Linux).

    Now, I don't know why, but I'm not able to do anything; the NAS is just beeping and doesn't mount the RAID... so I'm out of options :(

    I've put all the original drives back in, but right now I'm not sure about the order :(

    I've got some data I want to recover; can someone help me?

    I'm posting the result of mdadm --examine, thanks for your help :)

    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : 3528044c:5d163b06:70ef5310:ad5f312d
               Name : ubuntu:metadata=1.2
      Creation Time : Fri Oct 25 12:35:07 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 3898765312 (1859.08 GiB 1996.17 GB)
         Array Size : 5848147968 (5577.23 GiB 5988.50 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 22b227c9:0c224a81:07fb8957:141e2357

    Internal Bitmap : 8 sectors from superblock
        Update Time : Fri Oct 25 13:23:26 2019
           Checksum : 3bb16900 - correct
             Events : 12

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : 3528044c:5d163b06:70ef5310:ad5f312d
               Name : ubuntu:metadata=1.2
      Creation Time : Fri Oct 25 12:35:07 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 3898765312 (1859.08 GiB 1996.17 GB)
         Array Size : 5848147968 (5577.23 GiB 5988.50 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 0fba7ad7:3f1dd2a5:8c82adfb:b6b19967

    Internal Bitmap : 8 sectors from superblock
        Update Time : Fri Oct 25 13:23:26 2019
           Checksum : 23268749 - correct
             Events : 12

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : 3528044c:5d163b06:70ef5310:ad5f312d
               Name : ubuntu:metadata=1.2
      Creation Time : Fri Oct 25 12:35:07 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 3898765312 (1859.08 GiB 1996.17 GB)
         Array Size : 5848147968 (5577.23 GiB 5988.50 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : e6bf9f92:31ce7114:90ea27b0:601e0019

    Internal Bitmap : 8 sectors from superblock
        Update Time : Fri Oct 25 13:23:26 2019
           Checksum : b2cc12bf - correct
             Events : 12

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 913c687f:d215eba2:93c38d9a:37ca10cc
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Sun Oct 13 20:01:38 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848150464 (5577.23 GiB 5988.51 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : d1b73e24:349c517d:b4009bb5:a93f95f5

        Update Time : Sun Oct 13 20:11:28 2019
           Checksum : 72411265 - correct
             Events : 134

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 0
       Array State : AAAA ('A' == active, '.' == missing)

      
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    I see 3 partitions, sd[abc]3, which are part of a raid array created on Fri Oct 25 12:35:07 2019, and sdd3, which is part of an array created on Sun Oct 13 20:01:38 2019. So I guess you have no disk left containing an original raid header? Or was your original array created on Oct 13?

    'Array sdd3' has a data offset of 262144 sectors, while sd[abc]3 has a data offset of 264192 sectors, maybe due to the internal bitmap, which is AFAIK not an option on the 54x.

    So if sdd3 is original, the first 264192 - 262144 = 2048 sectors (1 MiB) of the original filesystem are now occupied by the new raid header. I don't know if that is a real problem. The raid header mainly contains nothing, but I don't know whether that area is also zeroed out.

    If you lost your original disk order, theoretically you'll have to try each order until you get something that contains a valid filesystem. And it should mount right away: a wrong order can seem to contain a valid filesystem but be unmountable, and repairing it will destroy everything.

    The number of possibilities on a 4-disk system is 4! = 24.
    When sdd3 is original, we know it was 'Active device 0', so only 3! = 6 possibilities are left.







  • ksr7
    ksr7 Posts: 15  Freshman Member
    Well, I tried the solution with the 3 disks... nothing happened...

    Now I'm trying the combinations with four disks, but I noticed something bad: 3 disks out of 4 are marked as "spare", even though I didn't make any changes, yet mdadm --examine still gives me the same information...
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    Spare? What exactly are you doing? Juggling the physical disks? The idea is to create an array with 'mdadm --create ...' using the 3 partitions and a 'missing' in different sequences, as sketched below. That should never give a spare.
    Physically moving the disks should have no effect at all, as their role in the array is written in the header.
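
    For illustration, one such attempt could look like the sketch below. The metadata version, chunk size and layout come from the --examine output above; the device order shown is just one of the six guesses, and the mount point is an assumption:

    # one guessed order; 'missing' stands in for the absent member, --assume-clean avoids starting a rebuild
    mdadm --create /dev/md2 --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/sdd3 /dev/sda3 /dev/sdb3 missing
    # mount read-only, so a wrong guess cannot be 'repaired'
    mount -o ro /dev/md2 /mnt/test
    # if the mount fails, stop the array and try the next order
    mdadm --stop /dev/md2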
  • ksr7
    ksr7 Posts: 15  Freshman Member
    Ok I got it.

    Right now I'm getting this:

    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sda3 /dev/sdb3 missing /dev/sdd3

    mdadm: super1.x cannot open /dev/sda3: Device or resource busy
    mdadm: /dev/sda3 is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdb3: Device or resource busy
    mdadm: /dev/sdb3 is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdd3: Device or resource busy
    mdadm: /dev/sdd3 is not suitable for this array.
    mdadm: create aborted
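
    The 'Device or resource busy' errors usually mean the kernel still has those partitions claimed, typically by an auto-assembled md array. A hedged sketch of how one might check and release them before retrying (the array name to stop depends on what /proc/mdstat actually shows):

    # see which arrays currently claim the partitions
    cat /proc/mdstat
    # stop the array that holds sda3/sdb3/sdd3 (the name below is an assumption)
    mdadm --stop /dev/md2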
