NAS540: Volume down, repairing failed, how to restore data?


All Replies

  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    Maybe
    mdadm --stop /dev/md2
    first?
  • ksr7
    ksr7 Posts: 15  Freshman Member
    edited October 2019
    I did it already

    # mdadm --stop /dev/md2
    mdadm: stopped /dev/md2
    ~ # mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sda3 /dev/sdb3 missing /dev/sdd3
    mdadm: super1.x cannot open /dev/sda3: Device or resource busy
    mdadm: /dev/sda3 is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdb3: Device or resource busy
    mdadm: /dev/sdb3 is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdd3: Device or resource busy
    mdadm: /dev/sdd3 is not suitable for this array.
    mdadm: create aborted

    Would it maybe be better to mount all the drives on a computer with a Linux live system instead of working directly on the NAS?
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    Maybe another md device is occupying the partitions? Have a look with
    cat /proc/mdstat
    Would it maybe be better to mount all the drives on a computer with a Linux live system instead of working directly on the NAS?
    It shouldn't make any difference. Most people have trouble connecting 4 SATA disks to a computer, but if you have the option, be my guest. Beware, though: somehow some of your RAID members had a bitmap in the header, while at least one hadn't. I'm pretty sure the NAS doesn't add bitmaps to its arrays.
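As a sketch of that check (the mdstat text below is a hard-coded sample so the snippet is self-contained; on the NAS you would grep /proc/mdstat itself):

```shell
# Sketch: find which md device currently claims a partition (here sdc3).
# Sample mdstat text; on the NAS, use: grep sdc3 /proc/mdstat
mdstat='md3 : active raid5 sda3[1] sdb3[3] sdd3[2]
md2 : inactive sdc3[0](S)'
printf '%s\n' "$mdstat" | grep 'sdc3'
# If an array claims the partition, stop that array first, e.g.:
#   mdadm --stop /dev/md2
```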
  • ksr7
    ksr7 Posts: 15  Freshman Member
    edited October 2019
    This is what cat /proc/mdstat says

    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
    md3 : active raid5 sda3[1] sdb3[3] sdd3[2]
          5848147968 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
          bitmap: 0/15 pages [0KB], 65536KB chunk

    md2 : inactive sdc3[0](S)
          1949383680 blocks super 1.2
           
    md1 : active raid1 sdb2[4] sdd2[6] sdc2[5] sda2[7]
          1998784 blocks super 1.2 [4/4] [UUUU]
          
    md0 : active raid1 sdb1[4] sdd1[6] sdc1[5] sda1[7]
          1997760 blocks super 1.2 [4/4] [UUUU]

    So I guess md3 is the one working...right?
          

  • ksr7
    ksr7 Posts: 15  Freshman Member
    Well, I tried all 6 possible combinations, leaving "missing" in the third position and switching sda, sdb and sdd accordingly, always stopping /dev/md2 first (or md3 sometimes; the NAS switched the active one, I don't know why), but I get the same result: I can't rebuild the RAID. Any new ideas?
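For what it's worth, the six commands can be generated rather than typed by hand. A sketch that only prints them and runs nothing against the disks, using the array parameters quoted earlier in this thread:

```shell
# Sketch: print the mdadm --create line for each ordering of sda3/sdb3/sdd3,
# with "missing" fixed in the third slot. Review each line before running it.
for a in sda3 sdb3 sdd3; do
  for b in sda3 sdb3 sdd3; do
    for c in sda3 sdb3 sdd3; do
      # skip orderings that reuse a partition
      if [ "$a" != "$b" ] && [ "$a" != "$c" ] && [ "$b" != "$c" ]; then
        echo "mdadm --create --assume-clean --level=5 --raid-devices=4" \
          "--metadata=1.2 --chunk=64K --layout=left-symmetric" \
          "/dev/md2 /dev/$a /dev/$b missing /dev/$c"
      fi
    done
  done
done
```

This prints six candidate commands, one per ordering of the three present disks.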
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    leave it "missing" in the third position

    Why the 3rd? Do you know something you didn't post?

  • ksr7
    ksr7 Posts: 15  Freshman Member
    I mean, since you said 3 drives out of 4 are active, I marked the one with the wrong details as missing... am I wrong?

    This is what I tried : 

    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sdb3 missing /dev/sda3

    Where the drive mounted as sdc3 has an index different from the others.

  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    Right. You had sd[abc]3, which couldn't possibly contain a header generated by the NAS, as their headers contained a bitmap. For what it's worth, they were 'Active device' 1, 2 and 3 of a 4-disk array.
    sdd3 looked like an original, NAS-made member, but its timestamps made me doubt that: created on Sun Oct 13 20:01:38 2019, last updated on Sun Oct 13 20:11:28 2019. So that disk was part of an array for only 10 minutes.

    I asked you about that status, but you didn't answer.

    Partition sdd3 was 'Active device 0', so if that is the only disk whose original position is known, the 1st disk is missing. But if the header on sdd3 is bogus, there is absolutely nothing to say about which disk is missing, unless you have extra info.
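The slot referred to here is the "Device Role" line of `mdadm --examine` output. A sketch of extracting it (the sample line below is illustrative, not taken from this system; on the NAS you would pipe the real `mdadm --examine /dev/sdd3` output into the same sed):

```shell
# Sketch: pull a member's slot out of mdadm --examine output.
# On the NAS: mdadm --examine /dev/sdd3 | sed -n 's/.*Device Role : //p'
sample='     Device Role : Active device 0'
printf '%s\n' "$sample" | sed -n 's/.*Device Role : //p'
# prints: Active device 0
```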
  • ksr7
    ksr7 Posts: 15  Freshman Member
    Sorry, I think I missed the part about the status you asked for; what do you need precisely?
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    So I guess you have no disk left containing an original RAID header? Or was your original array created on Oct 13?

