NAS540 - Can any data be recovered?

Unit died last night and is now showing two drives as empty and two as hot spares. Screenshots added. Any suggestions on how to recover any of the data?



Thanks all

Accepted Solution

  • Mijzelf  Posts: 2,745  Guru Member
    Answer ✓
    The physical sequence does not really matter. What matters is the role the disks have in the array. A four-disk raid5 array has four different roles, numbered from 0 to 3. When you create such an array through the firmware, the physical sequence is used.
    Normally the disks are, from left to right, called sda, sdb, sdc and sdd. When you have an SD card inserted at boot (or maybe a USB thumb drive), everything is shifted, and in your case it's sdb-sde.
    The third partition on each disk is used for the data volume, so that is sdb3-sde3.
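
    For example, to see which role a given member currently claims (assuming its raid header is still intact), you can run

    mdadm --examine /dev/sdc3 | grep 'Device Role'

    which on your disks prints something like 'Device Role : Active device 1'.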

    We recreated the array because (AFAIK) it's not possible to force-reinsert a member into an existing array, and your array was down due to too few remaining members, so one or more members would have had to be force-reinserted.
    The command to create the array is

    mdadm --create <options> mdX role0 role1 role2 role3

    where mdX is the device name of the array to be created, and role0-role3 are the device names of the members with that role. One of these may be 'missing', to create a degraded array.
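
    As a concrete sketch, creating a degraded array with the role-0 member absent would look like this (the --metadata, --chunk and --layout values must match the original array; the ones below are the ones we used for your disks):

    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 missing /dev/sdc3 /dev/sdd3 /dev/sde3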

    We know the roles of sdc3 and sdd3 (roles 1 and 2), as they were listed in the 'mdadm --examine' output at the beginning of this thread. sdb3 didn't have a valid raid header, and sde3 was not listed because the injected sda shifted the names out of the sd[abcd]3 wildcard, but I assumed it to be role 3.
    Now things get complicated. The initially created array didn't give a valid filesystem, and I assumed that was because sdb3 was actually empty. But maybe it was actually sde3 which was empty, and the missing raid header on sdb3 was a result of the crash.
    So we tried both 'missing /dev/sdc3 /dev/sdd3 /dev/sde3' and '/dev/sdb3 /dev/sdc3 /dev/sdd3 missing'. But if you swapped the disks, that should have been 'missing /dev/sdc3 /dev/sdd3 /dev/sdb3' and '/dev/sde3 /dev/sdc3 /dev/sdd3 missing'.
    If you now swapped them back again, the original sequence works again after re-creating the array. You can swap the disks of a created array without problems, as the raid manager reads the headers. (Of course it becomes more complicated to repair things, if you don't know the logical sequence of the disks. So it's better to keep the logical and physical sequence the same.)
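
    To test whether a particular recreation attempt produced a valid filesystem, a read-only mount is a safe check (a minimal sketch; /mnt/test is just an example mount point, and mount should auto-detect the filesystem type):

    mkdir -p /mnt/test
    mount -o ro /dev/md2 /mnt/test
    ls /mnt/test
    umount /mnt/test

    If the mount fails or the listing looks like garbage, that member order was probably not the original one.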

    When you pull the left disk, all other disks shift after a reboot. sdc becomes sdb. So that is also something to keep in mind. And if you pull your SD card, everything also shifts.

    The command 'cat /proc/mdstat' shows the active raid arrays. On a NAS5xx there are normally at least 3 of them: one swap array, one firmware array (md0 and md1, I don't know which is which) and one or more data arrays, md2 and higher. So you can ignore md0 and md1. (Although /proc/mdstat in this case shows they have 4 members, meaning all disks at least partly work, which is nice to know.)
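
    For reference, reading one line of your earlier output:

    md2 : inactive sdc3[1](S) sde3[4](S) sdd3[2](S)

    the number in square brackets is the device number within the array, and (S) marks a spare; an array that could not be assembled (too few usable members) shows up like this, inactive with its remaining members listed as spares.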

    You can see in /proc/mdstat that md0 is also a raid5 array. It is absolutely possible that their roles are identical to the roles in md2. That could be a way to find the original sequence of the disks:

    mdadm --examine /dev/sd[bcde]1

    The problem here is that it is possible to think of scenarios which make their sequences differ. For instance, exchanging the disks (md0 is still fine, but the physical sequence differs from the logical one) and then creating a new data volume. Boom! Different logical sequence.
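
    A quick way to list the first-partition roles side by side is a small loop like this (just a sketch; adjust the device list if your disk names differ):

    for d in /dev/sd[bcde]1; do echo -n "$d: "; mdadm --examine "$d" | grep 'Device Role'; done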

    I don't know what 'resource is busy' means here. Assuming you are trying to create the array, I think that either the array device or one of the members is busy. For the array device you can do a
    mdadm --stop /dev/md2

    For the members I don't know; it depends on how they are busy. For the raid device you can also use /dev/md3 or higher, as these are certainly not busy.
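
    If a member itself is reported busy, a common reason is that it is still attached to the old, inactive array; /proc/mdstat shows which array that is, and stopping it releases the member (md2 matches your output, adjust if needed):

    cat /proc/mdstat
    mdadm --stop /dev/md2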

    I hope I've made clear how things work; that should make it easier to understand what we (and mainly you) are doing.


All Replies

  • Outputs requested in previous posts.

    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : inactive sdc3[1](S) sdb3[4](S) sdd3[2](S)
          5848151040 blocks super 1.2

    md1 : active raid1 sde2[5] sdb2[4] sdc2[1] sdd2[2]
          1998784 blocks super 1.2 [4/4] [UUUU]

    md0 : active raid1 sde1[5] sdb1[4] sdc1[1] sdd1[2]
          1997760 blocks super 1.2 [4/4] [UUUU]

    unused devices: <none>
    ~ # cat /proc/partitions
    major minor  #blocks  name
       7        0     146432 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0   15558144 sda
       8        1   15556608 sda1
       8       16 1953514584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 1949514752 sdb3
       8       32 1953514584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35 1949514752 sdc3
       8       48 1953514584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51 1949514752 sdd3
       8       64 1953514584 sde
       8       65    1998848 sde1
       8       66    1999872 sde2
       8       67 1949514752 sde3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
    ~ #
  • Mijzelf  Posts: 2,745  Guru Member
    Can you enable the ssh server (somewhere in system->network), log in as root (admin password; you can use PuTTY for that), and post the output of

    cat /proc/mdstat
    mdadm --examine /dev/sd[abcd]3


  • I may have jumped the gun... I started following some of your older posts with similar issues. At the moment I am attempting to recover data using software, but it's hit and miss. I'll follow up after it completes, as the drives are in use.
  • / $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : inactive sdc3[1](S) sde3[4](S) sdd3[2](S)
          5848151040 blocks super 1.2

    md1 : active raid1 sde2[4] sdb2[5] sdd2[2] sdc2[1]
          1998784 blocks super 1.2 [4/4] [UUUU]

    md0 : active raid5 sdd1[0] sdc1[3] sdb1[2] sde1[1]
          5990400 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    unused devices: <none>
    / $ mdadm --examine /dev/sd[abcd]3


  • Mijzelf  Posts: 2,745  Guru Member
    octhrope said:
    / $ mdadm --examine /dev/sd[abcd]3


    You are not logged in as root.
  • ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : inactive sdc3[1](S) sde3[4](S) sdd3[2](S)
          5848151040 blocks super 1.2

    md1 : active raid1 sde2[4] sdb2[5] sdd2[2] sdc2[1]
          1998784 blocks super 1.2 [4/4] [UUUU]

    md0 : active raid5 sdd1[0] sdc1[3] sdb1[2] sde1[1]
          5990400 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    unused devices: <none>
    ~ # mdadm --examine /dev/sd[abcd]3
    mdadm: cannot open /dev/sda3: No such device or address
    mdadm: No md superblock detected on /dev/sdb3.
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : cd50df70:9eea3ff2:14934590:3237388f
               Name : NAS540:2
      Creation Time : Mon Oct 16 18:22:39 2017
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 4633040a:942542bf:ba1f3b82:8325eb49

        Update Time : Mon Oct  4 05:54:27 2021
           Checksum : b48d9305 - correct
             Events : 1309

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AA. ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : cd50df70:9eea3ff2:14934590:3237388f
               Name : NAS540:2
      Creation Time : Mon Oct 16 18:22:39 2017
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 5d1e395d:76a8eb40:e150d42e:56333fbb

        Update Time : Mon Oct  4 05:54:27 2021
           Checksum : a7594001 - correct
             Events : 1309

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AA. ('A' == active, '.' == missing)
    ~ #


  • Mijzelf  Posts: 2,745  Guru Member
    edited October 2021
    Do you have a USB stick or SD card inserted? sda doesn't seem to be a hard disk. Are you aware of anything happening on Mon Oct  4 05:54:27 UTC? A power failure? That seems to be the time two disks were kicked from the array, and one (sdb) doesn't have a raid header anymore, while its other partitions are still members of the swap and firmware arrays.
    sdc and sdd are active devices 1 and 2 of the array (counted from 0), so I think it's a good guess that sdb should be device 0, and sde device 3. As the other two disks seem to be alive, I think you can build a new array around your data.

    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5  --raid-devices=4 --metadata=1.2 --chunk=64K  --layout=left-symmetric /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3

    Those are two lines, both starting with mdadm. And I assume that you don't remove the USB stick/SD card.

    BTW, is your fan working? The disks seem to be a bit hot. 57C is not extreme, but I suppose the disks are almost idle, in which case it is more than I would expect.
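
    If smartctl happens to be installed (it may not be part of the stock firmware), you can read the temperature and reallocated-sector count directly; this is just an optional health check, with sdb as an example device:

    smartctl -A /dev/sdb | grep -iE 'temperature|reallocated'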

  • octhrope  Posts: 14
    edited October 2021
    I do have an SD card in there; I was doing testing years ago and forgot about it.
    The fan is working. One drive is on its last legs; I have a new set of disks to swap in after the data recovery.

    Yeah, that was the day the system hung. I was attempting to rebuild the array, but I couldn't access the system, and around 2:00 UTC I manually rebooted it.

    I ran the commands and now I have an array, but no volume or access to the data.

    Thank you for all your help with this so far, I really appreciate it.
  • Mijzelf  Posts: 2,745  Guru Member
    octhrope said:
    I was attempting to rebuild the array, but I couldn't access the system, and around 2:00 UTC I manually rebooted it.
    You were attempting to rebuild the array? What exactly do you mean?
    About the missing volume: did you reboot the NAS? If not, try it.
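
    You can also check from the shell whether the recreated array carries a recognizable filesystem at all (non-destructive; blkid is an assumption here, it may or may not be present in the firmware):

    cat /proc/mdstat
    blkid /dev/md2

    If blkid reports no filesystem type on /dev/md2, the member order used for --create was probably not the original one.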
  • octhrope  Posts: 14
    edited October 2021
    A drive was throwing an error and the NAS asked me to replace it, so I did, and it started copying the data. I let it run for 24+ hours, and then I was unable to log in or see the shares, so I waited a bit longer and then rebooted. It came back in this state. So whatever happened took place at midnight my time, based on your previous note; it failed and now I'm in this state.

    I attempted to pull the data via Linux, but since the array wouldn't mount properly the data was only partially visible. I'm hoping this will at least get me to a point where I can grab most of it, and then I'll replace all the drives and start over.
    -----

    I have rebooted and it says there are no volumes. The one we created is shown in red, and if I click 'create volume' it says to insert drives first.
