NAS 540 has lost all data.


All Replies

  • Cha
    Cha Posts: 16  Freshman Member
    Thanks for your detailed explanation. I entered the command. Here is the result:


    ~ $ mdadm --examine /dev/sd[abcd]3
    mdadm: cannot open /dev/sda3: Permission denied
    mdadm: cannot open /dev/sdb3: Permission denied
    mdadm: cannot open /dev/sdc3: Permission denied
    mdadm: cannot open /dev/sdd3: Permission denied


    Could that be related to the fact that the volumes are no longer set up in the RAID 5 array?

    -----------------------------------------------------------------------------------

    On closer inspection I found that only the admin user is still there; all other users are gone.
    The shared folders are still visible, but their status is "Lost" (see the attached screenshots).

    Maybe that helps to get a clearer picture of the whole thing.
    Greetings and thanks
  • Cha
    Cha Posts: 16  Freshman Member
    Is there a way to restore the permissions so that the admin can access all the folders again? Or does the data belonging to the folders probably have to be restored first?
  • Mijzelf
    Mijzelf Posts: 2,858  Guru Member
    mdadm: cannot open /dev/sda3: Permission denied

    Ah, sorry. After a new login a new 'su' is needed:

    su
    mdadm --examine /dev/sd[abcd]3
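
    (A quick sanity check that the 'su' took effect -- standard shell commands, nothing NAS540-specific:)

    id -u       # prints 0 once you are root
    whoami      # should print 'root'; mdadm needs root to read the raw disk superblocks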
    On closer inspection I found that only the admin user is still there; all other users are gone. The shared folders are still visible, but their status is "Lost" (see the attached screenshots).


    The 'shares' are no more than pointers to a directory on the data volume. There is no data volume, so the shares point to nothing. The share database is stored outside the data volume, and so it's still there.
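
    (To illustrate the idea with hypothetical names and paths: a share is just a name pointing at a directory on the data volume, so while the volume is not assembled and mounted that target directory simply does not exist, which is why the share shows as "Lost".)

    # purely illustrative -- share name and mount point are hypothetical
    ls -d /mnt/md2/public    # fails while the data volume is not mounted, hence the share status "Lost"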
  • Cha
    Cha Posts: 16  Freshman Member
    ~ # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x2
         Array UUID : 04004588:1416a156:bfa20978:5756d5aa
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Sat Sep 19 19:20:34 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
    Recovery Offset : 0 sectors
              State : clean
        Device UUID : fabc850c:d92b27c4:ef5b37de:ffcafb50

        Update Time : Fri Dec 21 05:39:03 2018
           Checksum : 7de6aeb1 - correct
             Events : 24293

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 0
       Array State : AAAA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 04004588:1416a156:bfa20978:5756d5aa
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Sat Sep 19 19:20:34 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 7adf8778:894abce0:2146d412:c64e1c7f

        Update Time : Fri Dec 21 05:48:28 2018
           Checksum : 693c6190 - correct
             Events : 24301

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 04004588:1416a156:bfa20978:5756d5aa
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Sat Sep 19 19:20:34 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 1f1b9e0e:9a037d12:13ff3598:3e948926

        Update Time : Fri Dec 21 05:48:28 2018
           Checksum : 5de254b0 - correct
             Events : 24301

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 04004588:1416a156:bfa20978:5756d5aa
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Sat Sep 19 19:20:34 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 8728209d:b362321a:0fbc6cd2:e3926f5d

        Update Time : Fri Dec 21 05:48:28 2018
           Checksum : 65367cd4 - correct
             Events : 24301

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : .AAA ('A' == active, '.' == missing)

  • Cha
    Cha Posts: 16  Freshman Member
    Sorry, the thing with the "su", I could have thought of that myself!
  • Mijzelf
    Mijzelf Posts: 2,858  Guru Member
    Hmm. For some reason sda3 was kicked out of the array at Dec 21 05:48:28 2018. The other 3 members seem healthy, so the array should just be degraded. Instead it's gone.
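
    (One way to see that at a glance from the --examine output above, using standard mdadm and grep:)

    su
    mdadm --examine /dev/sd[abcd]3 | grep -E '^/dev/|Update Time|Events|Array State'
    # sda3 stops at Events 24293 / 05:39:03, while sdb3-sdd3 are at 24301 and report '.AAA'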

    Can you assemble the array manually (degraded)?
    su
    mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
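
    (If the assemble succeeds, a quick way to confirm the array is actually running:)

    cat /proc/mdstat         # md2 should be listed as active raid5 with 3 of 4 members
    mdadm --detail /dev/md2  # should report 'clean, degraded' with one slot missing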
  • Cha
    Cha Posts: 16  Freshman Member
    Hello!
    here the result of the input:

    ~ # mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
    mdadm: /dev/md2 has been started with 3 drives (out of 4).

    When I entered it a second time, I got the following result:

    ~ # mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
    mdadm: /dev/sdb3 is busy - skipping
    mdadm: /dev/sdc3 is busy - skipping
    mdadm: /dev/sdd3 is busy - skipping

    Here you can see the latest screenshots of the "Storage Manager".
  • Mijzelf
    Mijzelf Posts: 2,858  Guru Member
    mdadm: /dev/md2 has been started with 3 drives (out of 4).

    At this moment the data volume was back. Didn't it automagically show up in the volume list?
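
    (A quick way to check whether the assembled array was also picked up and mounted by the firmware -- plain shell, nothing NAS540-specific:)

    su
    cat /proc/mdstat     # is md2 still listed as active?
    mount | grep md2     # is it mounted anywhere?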

  • Cha
    Cha Posts: 16  Freshman Member
    Here you can see the latest screenshots of the "Storage Manager".
  • Mijzelf
    Mijzelf Posts: 2,858  Guru Member
    Can you try to mount the volume?
    su
    mkdir -p /mnt/md2
    mount /dev/md2 /mnt/md2
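
    (If the mount succeeds, the data should be reachable again, e.g.:)

    df -h /mnt/md2    # size and usage of the data volume
    ls /mnt/md2       # the old share directories should show up here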
