NAS540 - Can any data be recovered?

All Replies

  • Mijzelf
    Mijzelf Posts: 2,605  Guru Member
    Oh right. We created the array, telling the system it was a complete, healthy array. But in reality one of the members is empty, so the array doesn't contain a valid filesystem.
    So the array has to be created degraded. That is easy: just replace the empty disk with 'missing' in the command. As sdb3 didn't have a raid header, I suppose you exchanged the first disk? In that case the command should be

    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5  --raid-devices=4 --metadata=1.2 --chunk=64K  --layout=left-symmetric /dev/md2 missing /dev/sdc3 /dev/sdd3 /dev/sde3
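
    If the create succeeds, a quick way to confirm the array really came up degraded as intended (these verification commands are a general suggestion, not part of the original instructions) would be:

    cat /proc/mdstat          # md2 should show 3 of 4 members, e.g. [_UUU]
    mdadm --detail /dev/md2   # State should read clean, degraded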



  • octhrope
    octhrope Posts: 14
    edited October 2021
    Awesome, done. This isn't going to wipe anything? It's asking to set up a new volume.

    I did a reboot without creating a volume, and it's beeping again. A good thing? =)
  • Mijzelf
    Mijzelf Posts: 2,605  Guru Member
    No, it doesn't wipe anything, as long as you don't add the 4th disk to the array before you have your volume back.
    The beeping is to be expected; you have a degraded array.
    It's a bummer that you don't have your volume back. It was the left disk you exchanged, wasn't it?
    Maybe you have a 'logical volume' on your physical array. Does

    vgscan
    lvscan -a

     (as root) give any volumes?
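
    If those scans did report a volume group, the usual next step would be roughly the following (the volume group and logical volume names below are placeholders, not taken from this thread; use whatever lvscan actually reports):

    vgchange -ay                            # activate any volume groups that were found
    lvscan                                  # the logical volume should now show as ACTIVE
    mount /dev/vg_example/lv_example /mnt   # example paths only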

  • octhrope
    Doesn't look good:

    ~ # vgscan
      Reading all physical volumes.  This may take a while...
    ~ # lvscan -a
    ~ # vgscan
      Reading all physical volumes.  This may take a while...
    ~ # lvscan -a
    ~ # vgscan
      Reading all physical volumes.  This may take a while...
    ~ # lvscan -a
    ~ #


  • Mijzelf
    Mijzelf Posts: 2,605  Guru Member
    Not many options remain. You can re-create the array as if the last disk was exchanged, and (partly?) empty:

    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5  --raid-devices=4 --metadata=1.2 --chunk=64K  --layout=left-symmetric /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3 missing

    If that doesn't give you a volume, you can put it back to the previous one (with missing as first member), as that is logically the most probable topology, and try to repair the filesystem:

    e2fsck /dev/md2
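
    Before running e2fsck, it may be worth checking whether the re-created array exposes anything recognizable at all; if these tools are present in the firmware, something like this would do (a sketch, not part of the original advice):

    blkid /dev/md2          # should report an ext4 signature if the member order is right
    dumpe2fs -h /dev/md2    # prints the superblock header, if one can be found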



  • octhrope
    The commands failed, but maybe this is promising?


    The superblock could not be read or does not describe a valid ext2/ext3/ext4
    filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
        e2fsck -b 8193 <device>
     or
        e2fsck -b 32768 <device>





  • Mijzelf
    Mijzelf Posts: 2,605  Guru Member
    No, sorry. That is the standard blurb e2fsck gives if it doesn't recognize the filesystem at all. When creating the filesystem, mke2fs gives a list of addresses where backup superblocks are stored. Theoretically you could provide one of them to e2fsck, in case you had written them down and for some reason the default superblock was wiped while the rest of the filesystem was not.
    Well, that's not the case here. You didn't write down the addresses because you couldn't, as the firmware didn't pass them (and I've never met someone who actually writes down those addresses; backups are more powerful and easier), and you don't have a wiped superblock. If something goes wrong with raid5, you get intermittently missing data blocks over the whole volume: 128 kB of data followed by 64 kB of garbage, and that repeating.
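
    For reference, those backup superblock locations can be listed without writing anything by doing a dry run of mke2fs; roughly like this (assuming the filesystem was created with default parameters, otherwise the reported locations won't match):

    mke2fs -n /dev/md2      # -n: don't actually create a filesystem, just show what would be done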

    Have you ever exchanged the leftmost and rightmost disk in the past? For the raid array manager that doesn't matter, as the role of the disk is written in the header. But when the array has to be re-created, it does matter.
    How about the old disk? How dead is it?
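
    For what it's worth, that role can be read back from any member partition whose original raid header is still intact, for example (the device name here is just an example):

    mdadm --examine /dev/sdc3    # the "Device Role" line shows which slot this disk held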
  • octhrope
    Far left (1) was the original offender. Far right showed blank after the whole thing fell apart and I started trying to recover. I think they might be in there backwards at the moment, though; disk 1 is in disk 4's spot. I'll swap them around.
  • octhrope
    None of the commands run; it says the resources are busy. Looks like the end, unless it was a raid 10, I don't remember...
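
    A side note: 'resource busy' from mdadm --create usually means the member partitions are still claimed by an already-assembled array. The standard way to check and free them would be something like this (a general sketch, not advice given in this thread):

    cat /proc/mdstat         # see which md arrays currently claim the sdX3 partitions
    mdadm --stop /dev/md2    # stop whichever array holds them before re-creating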
  • octhrope
    Moved the drives around. None of the commands run now; it tells me the resource is busy. But there is an md1?

    mdadm: No devices given.
    ~ # mdadm --detail /dev/md2
    mdadm: cannot open /dev/md2: No such file or directory
    ~ # mdadm --detail /dev/md1
    /dev/md1:
            Version : 1.2
      Creation Time : Mon Oct 16 18:22:38 2017
         Raid Level : raid1
         Array Size : 1998784 (1952.27 MiB 2046.75 MB)
      Used Dev Size : 1998784 (1952.27 MiB 2046.75 MB)
       Raid Devices : 4
      Total Devices : 4
        Persistence : Superblock is persistent

        Update Time : Mon Oct  4 09:55:08 2021
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0

               Name : NAS540:1
               UUID : 061a1f51:a01573a4:4d28d8ab:401c5dd5
             Events : 90

        Number   Major   Minor   RaidDevice State
           4       8       18        0      active sync   /dev/sdb2
           1       8       34        1      active sync   /dev/sdc2
           2       8       50        2      active sync   /dev/sdd2
           5       8       66        3      active sync   /dev/sde2
    ~ # mdadm --detail /dev/md2
    mdadm: cannot open /dev/md2: No such file or directory
    ~ #
    ~ #

    Does this mean anything?
