NAS540: Volume down, repairing failed, how to restore data?


All Replies

  • basetron
    basetron Posts: 13  Freshman Member
    Yes, I did, at least I tried:

    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md3 : active raid1 sdc3[0] sdd3[1]
          1949383488 blocks super 1.2 [2/2] [UU]
    md2 : active raid1 sdb3[1]
          1949383488 blocks super 1.2 [2/1] [_U]
    md1 : active raid1 sdb2[4] sdd2[6] sdc2[5]
          1998784 blocks super 1.2 [4/3] [U_UU]
    md0 : active raid1 sdb1[4] sdd1[6] sdc1[5]
          1997760 blocks super 1.2 [4/3] [U_UU]

    unused devices: <none>



    and --examine shows:

    ~ # mdadm --examine /dev/sd[abcd]3
    mdadm: cannot open /dev/sda3: No such device or address
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 484fedb2:c372673b:e71dd228:1828bb3f
               Name : XXX:2  (local to host XXX)
      Creation Time : Fri May 31 22:00:27 2019
         Raid Level : raid1
       Raid Devices : 2

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 1949383488 (1859.08 GiB 1996.17 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 6837f4b4:26f3ea37:5e2107e3:82ca12f0

        Update Time : Mon Jun  3 04:51:28 2019
           Checksum : d6ff1d37 - correct
             Events : 4

        Device Role : Active device 1
        Array State : .A ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : da1a17ea:f2a26d21:d35d0f62:4daa646d
               Name : XXX:3  (local to host XXX)
      Creation Time : Mon May  9 16:11:30 2016
         Raid Level : raid1
       Raid Devices : 2

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 1949383488 (1859.08 GiB 1996.17 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : ca63fccc:a5346ad6:6c4a09ed:be7f9a64

        Update Time : Fri May 31 22:02:15 2019
           Checksum : 8a623713 - correct
             Events : 2

        Device Role : Active device 0
        Array State : AA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : da1a17ea:f2a26d21:d35d0f62:4daa646d
               Name : XXX:3  (local to host XXX)
      Creation Time : Mon May  9 16:11:30 2016
         Raid Level : raid1
       Raid Devices : 2

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 1949383488 (1859.08 GiB 1996.17 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 201d9c23:684df915:3de56760:e8bcfcfe

        Update Time : Fri May 31 22:02:15 2019
           Checksum : 2e51e127 - correct
             Events : 2

        Device Role : Active device 1
        Array State : AA ('A' == active, '.' == missing)


    It keeps beeping, and the web interface tells me I need to create at least one volume. I see no reason to do so - creating a volume would erase the disk(s) it is created on, and that would be a disaster indeed.

  • basetron
    basetron Posts: 13  Freshman Member
    Dunno if it helps but my initial configuration was raid10 
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    Dunno if it helps but my initial configuration was raid10
    At least it changes something. I have no experience with raid10 on a ZyXEL. All three headers are for raid1, so either you used mdadm to break things before you asked for help here, or ZyXEL uses a layered approach.
    If a layered approach is used, the arrays md2 and md3 are two members of a third, raid0 array. If so, that should be visible in
    mdadm --examine /dev/md2 /dev/md3
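    If such a nested layout exists, assembling the outer array would look roughly like this - just a sketch, with /dev/md4 as a hypothetical name for the outer array:

    # --examine on md2/md3 should then show "Raid Level : raid0"
    mdadm --examine /dev/md2 /dev/md3
    # assemble the outer array read-only so nothing gets written to it
    mdadm --assemble --readonly /dev/md4 /dev/md2 /dev/md3
    cat /proc/mdstat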

  • basetron
    basetron Posts: 13  Freshman Member
    I don't expect it to work, since I've destroyed my raid with

    mdadm --create --assume-clean --level=1 --raid-devices=2 --metadata=1.2 /dev/md2 missing /dev/sdb3
    but the output is as follows:

    mdadm: No md superblock detected on /dev/md2.
    mdadm: No md superblock detected on /dev/md3.


  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    I don't expect it to work since I've destroyed  my  raid
    That command only generates a new header, and leaves the content of the array intact. So if the raid was just a wrapper around another raid, the inner raid should still be intact.
    But according to mdadm the arrays don't contain superblocks.

    How sure are you that you had raid10? To build a raid10 array you have to start with at least 4 disks. But the headers you posted on 29 May show 2 raid1 arrays, one created on Fri Jul 17 21:09:32 2015 and one on Mon May  9 17:11:30 2016. So I don't see how this could have been built as raid10 by the firmware. (Yes, I can think of some obscure scenarios for building it manually, but I believe that's beyond your skills.)
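    If you want to compare those fields quickly yourself, grepping the same --examine output you already posted is enough:

    mdadm --examine /dev/sd[bcd]3 | grep -E 'Array UUID|Creation Time|Raid Level|Events|Device Role'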

  • basetron
    basetron Posts: 13  Freshman Member
    I'm sure it was raid10. Creating one isn't anything exceptionally difficult. If one drive fails (which is what actually happened), raid10 enters degraded mode. But when I tried to replace the faulty drive, some of my shares vanished (which shouldn't have happened), and rebuilding the array didn't help. mdadm's automatic recovery almost always fails because of the event count... forcing and manual adding failed too.
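    Roughly what I mean by forcing and manually adding - a sketch with placeholder device names, not necessarily exactly what I ran:

    mdadm --stop /dev/md2
    mdadm --assemble --force /dev/md2 /dev/sdX3 /dev/sdY3   # forced assemble despite the event-count mismatch
    mdadm /dev/md2 --add /dev/sdZ3                          # manually re-add a member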

    [previous mdstat]
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md3 : active raid1 sdc3[0] sdd3[1]      1949383488 blocks super 1.2 [2/2] [UU]
    md2 : inactive sdb3[2](S)      1949383680 blocks super 1.2
    md1 : active raid1 sdb2[4] sdc2[5] sdd2[6]      1998784 blocks super 1.2 [4/3] [U_UU]
    md0 : active raid1 sdb1[4] sdc1[5] sdd1[6]      1997760 blocks super 1.2 [4/3] [U_UU]

    [current mdstat]
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md3 : active raid1 sdc3[0] sdd3[1] 1949383488 blocks super 1.2 [2/2] [UU]
    md2 : active raid1 sdb3[1] 1949383488 blocks super 1.2 [2/1] [_U]
    md1 : active raid1 sdb2[4] sdd2[6] sdc2[5]  1998784 blocks super 1.2 [4/3] [U_UU]
    md0 : active raid1 sdb1[4] sdd1[6] sdc1[5]  1997760 blocks super 1.2 [4/3] [U_UU]

    I'm running on 4 disks. Raid10 is nothing unusual; I used to have one on the same drives, on Debian - I built it manually. It's quite easy to build, but when there was trouble (like a degraded raid due to disk failure) I couldn't get my files back any other way than by mounting the drives and copying the data off, which is a very stupid thing to do if one can simply reassemble the raid. I switched to Zyxel in the hope that it is something beyond a mere software raid and has more tools, so I didn't bother saving the raid status.

    Is there anything left but to mount the drives on another system and copy my files to a different drive? 
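    Something like this is what I have in mind, assuming the data really sits directly on the raid1 pairs (no striping or LVM in between) - device names will of course differ on another machine:

    # on another Linux box, read-only throughout
    mdadm --assemble --readonly /dev/md3 /dev/sdc3 /dev/sdd3
    mount -o ro /dev/md3 /mnt
    cp -a /mnt/. /path/to/backup/
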
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    Is there anything left but to mount the drives on another system and copy my files to a different drive?
    If you know a way, please proceed. But I don't see how to assemble the volume. The members are raid1, not raid10, so the only way to get a raid10-ish volume is if there is a raid0 array inside or outside the current arrays. Outside is not possible, I think, and mdadm says no about inside.

    Raid0 is a bitch to recover, as it chops the members into small chunks which are interleaved. So if assembling the array fails, low-level recovery is almost impossible.

    Further, I really don't see how a single raid10 array can have members whose array creation dates differ by nearly a year. AFAIK the original date is written to an exchanged member.


  • basetron
    basetron Posts: 13  Freshman Member
    Hi, after endless attempts (all in vain) to reassemble the array, I'm just wondering what type of raid it might have been.

    I started out with 2 volumes, each comprising 2 drives.

    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md3 : active raid1 sdc3[0] sdd3[1]
          1949383488 blocks super 1.2 [2/2] [UU]
    md2 : inactive sdb3[2](S)
          1949383680 blocks super 1.2
    md1 : active raid1 sdb2[4] sdc2[5] sdd2[6]
          1998784 blocks super 1.2 [4/3] [U_UU]
    md0 : active raid1 sdb1[4] sdc1[5] sdd1[6]
          1997760 blocks super 1.2 [4/3] [U_UU]
    unused devices: <none>
    and the mdadm config was:

    ARRAY /dev/md0 level=raid1 num-devices=4 metadata=1.2 name=NAS540:0 UUID=60e20528:5ff04d12:9729d455:3fdd0b58   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1
    ARRAY /dev/md1 level=raid1 num-devices=4 metadata=1.2 name=NAS540:1 UUID=069aed49:2bf44e5c:b42db67f:82d34ecc   devices=/dev/sdb2,/dev/sdc2,/dev/sdd2
    ARRAY /dev/md3 level=raid1 num-devices=2 metadata=1.2 name=XXX:3 UUID=da1a17ea:f2a26d21:d35d0f62:4daa646d   devices=/dev/sdc3,/dev/sdd3
    So it looks as if there are 2 raid1 volumes instead of raid10. Could you please give me a hint on what the reassemble command should look like, assuming there were 2 raid1 volumes? I'm quite afraid of losing my data, because after 3 weeks of struggling I'm still at the same point (or have even moved backwards, since the data from volume 2 is no longer available - it all started after issuing the command below).

    mdadm --create --assume-clean --level=1 --raid-devices=2 --metadata=1.2 /dev/md2 missing /dev/sdb3
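    For what it's worth, this is the kind of reassembly I have in mind for the two raid1 pairs - just a sketch built from the UUIDs above, not something I've run yet:

    # (the arrays would have to be stopped first: mdadm --stop /dev/md2 /dev/md3)
    # md3 still has both members and its original UUID from the config above
    mdadm --assemble --readonly /dev/md3 --uuid=da1a17ea:f2a26d21:d35d0f62:4daa646d /dev/sdc3 /dev/sdd3
    # md2 is missing from that config (its header was rewritten by the --create above),
    # so only the remaining member can be started, degraded
    mdadm --assemble --readonly --run /dev/md2 /dev/sdb3
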
  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    The data on md3 disappeared after you created md2?

    I have figured out that there's another way to create a raid10-ish volume, using 'logical volumes'. In that case you have 2 raid1 volumes which together form a volume group, containing one logical volume. I don't know if I'm using the right naming.
    If that is the case, I'd expect 'lvscan --all' and/or 'vgscan' to output something useful.
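    Roughly like this (read-only, and the volume group / logical volume names below are just placeholders for whatever the scans report):

    vgscan                     # look for a volume group spanning md2 and md3
    lvscan --all               # list logical volumes, active or not
    # if something shows up, activate and mount it read-only:
    vgchange -ay vg_name
    mount -o ro /dev/vg_name/lv_name /mnt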

    BTW, that create command looks fine to me.
  • basetron
    basetron Posts: 13  Freshman Member
    Well, I know how it sounds, but that is exactly what happened.

    Before executing the --create command, the first volume was accessible and the second was unavailable. Afterwards, both went down.

    As soon as I'm at home I'll post the output from lvscan.

    Thanks,

    B.
