NAS 540 has lost all data.
All Replies
Thanks for your detailed explanation. I entered the command. Here is the result:

~ $ mdadm --examine /dev/sd[abcd]3
mdadm: cannot open /dev/sda3: Permission denied
mdadm: cannot open /dev/sdb3: Permission denied
mdadm: cannot open /dev/sdc3: Permission denied
mdadm: cannot open /dev/sdd3: Permission denied

Could it be related to the fact that the volumes are no longer set up in the RAID 5 array?

On closer inspection, I found that only the admin user is still available; everyone else is gone. The shared folders are still visible, but have the status: Lost (see pictures in the appendix). Maybe that helps to get a clearer picture of the whole thing.

Greetings and thanks
Is there a way to restore the permissions so that all the folders can be accessed from System Admin again? Or probably only once the data belonging to the folders has been restored?
mdadm: cannot open /dev/sda3: Permission denied
Ah, sorry. After a new login a new 'su' is needed:
su

mdadm --examine /dev/sd[abcd]3
On closer inspection, I found that only the admin user is still available; everyone else is gone. The shared folders are still visible, but have the status: Lost (see pictures in the appendix).
The 'shares' are no more than pointers to a directory on the data volume. There is no data volume, so the shares point to nothing. The share database is stored outside the data volume, and so it's still there.
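If you want to double-check that there really is no array behind the data volume, the kernel's RAID status will tell you. A minimal check (on the NAS540 the data array is normally /dev/md2, as in the commands further down in this thread):

su

# List the md arrays the kernel currently knows about; the data array should show up here.
cat /proc/mdstat

# If md2 does exist, print its state and member disks.
mdadm --detail /dev/md2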
~ # mdadm --examine /dev/sd[abcd]3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x2
Array UUID : 04004588:1416a156:bfa20978:5756d5aa
Name : NAS540:2 (local to host NAS540)
Creation Time : Sat Sep 19 19:20:34 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Recovery Offset : 0 sectors
State : clean
Device UUID : fabc850c:d92b27c4:ef5b37de:ffcafb50
Update Time : Fri Dec 21 05:39:03 2018
Checksum : 7de6aeb1 - correct
Events : 24293
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)

/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 04004588:1416a156:bfa20978:5756d5aa
Name : NAS540:2 (local to host NAS540)
Creation Time : Sat Sep 19 19:20:34 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 7adf8778:894abce0:2146d412:c64e1c7f
Update Time : Fri Dec 21 05:48:28 2018
Checksum : 693c6190 - correct
Events : 24301
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : .AAA ('A' == active, '.' == missing)

/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 04004588:1416a156:bfa20978:5756d5aa
Name : NAS540:2 (local to host NAS540)
Creation Time : Sat Sep 19 19:20:34 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 1f1b9e0e:9a037d12:13ff3598:3e948926
Update Time : Fri Dec 21 05:48:28 2018
Checksum : 5de254b0 - correct
Events : 24301
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : .AAA ('A' == active, '.' == missing)

/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 04004588:1416a156:bfa20978:5756d5aa
Name : NAS540:2 (local to host NAS540)
Creation Time : Sat Sep 19 19:20:34 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 8728209d:b362321a:0fbc6cd2:e3926f5d
Update Time : Fri Dec 21 05:48:28 2018
Checksum : 65367cd4 - correct
Events : 24301
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : .AAA ('A' == active, '.' == missing)
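For comparing the members side by side, the fields that matter in the output above can be filtered out with something like this (just a sketch; it assumes the firmware's grep supports -E):

su

# Show only the lines that reveal which member fell behind.
mdadm --examine /dev/sd[abcd]3 | grep -E 'dev/sd|Events|Update Time|Array State'

In the output above, sda3 stopped at event count 24293 on 05:39:03, while the other three members are at 24301 and no longer list the first slot as active.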
Sorry, I could have thought of the 'su' myself!
Hmm. For some reason sda3 was kicked off the array at Dec 21 05:48:28 2018. The other 3 members seem healthy, so the array should be degraded. Instead it's gone. Can you assemble the array manually (degraded)?
su

mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
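If the assembly succeeds, it's worth confirming that the array is up, although degraded, before going further. A quick check (separate from the assembly itself):

su

# 'State' should read something like clean, degraded, and 'Active Devices' should be 3 of the 4.
mdadm --detail /dev/md2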
Hello!

Here is the result of the input:

~ # mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
mdadm: /dev/md2 was started with 3 drives (out of 4).

On the second input I had the following result:

~ # mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
mdadm: /dev/sdb3 is busy - skipped
mdadm: /dev/sdc3 is busy - skipped
mdadm: /dev/sdd3 is busy - skipped

Here you can see the latest screenshots of the "Storage Manager".
mdadm: /dev/md2 was started with 3 drives (out of 4).
At this moment the data volume was back. Didn't it automagically show up in the volume list?
Can you try to mount the volume?
su

mkdir -p /mnt/md2

mount /dev/md2 /mnt/md2
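If the mount succeeds, a quick way to see whether the data is actually reachable (still as root, using the /mnt/md2 mount point created above):

# Show the mounted filesystem and how much of it is used.
df -h /mnt/md2

# List the top level of the volume; the shared folders should be visible here.
ls /mnt/md2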