NAS540: Volume down, no option to repair
All Replies
Right. Your second disk is dead, I'm afraid, but the remaining disks should be enough to rebuild the array (degraded). Unfortunately your disk sdd3 has lost its device role; it is now 'spare', but assuming the disks are in the same physical positions as when you created the array, it should be 'Active device 3'. In that case the commands to re-build the array are

mdadm --stop /dev/md2
mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric --bitmap=none /dev/md2 /dev/sda3 missing /dev/sdc3 /dev/sdd3

Those are two lines, each starting with mdadm. I don't dare to put them in code tags, as this forum is acting strange. When your array is up again, you can pull the second disk and put a new disk in to get the array healthy again.
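As a rough sketch of how to sanity-check this (the grep pattern and the ordering are illustrative, not part of the reply above): before the destructive re-create, confirm the recorded roles on the surviving partitions, and afterwards confirm that md2 is actually running degraded before letting the firmware mount it.

mdadm --examine /dev/sd[acd]3 | grep -E 'Array UUID|Device Role|Chunk Size|Layout'
cat /proc/mdstat
mdadm --detail /dev/md2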
Thank you for the great assistance.
mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric --bitmap=none /dev/md2 /dev/sda3 missing /dev/sdc3 /dev/sdd3
I got an error, it said I need to use --grow?
Omit the --bitmap=none
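Spelled out for convenience (this is simply the earlier command with that one option removed):

mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sda3 missing /dev/sdc3 /dev/sdd3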
I encountered a similar problem on my NAS540 with RAID 6. Can somebody help me solve it?

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sda2[0] sdc2[5] sdd2[6] sdb2[4]
1998784 blocks super 1.2 [4/4] [UUUU]
md0 : active raid1 sda1[0] sdc1[5] sdd1[6] sdb1[4]
1997760 blocks super 1.2 [4/4] [UUUU]

# cat /proc/partitions
major minor  #blocks  name
   7     0     147456 loop0
  31     0        256 mtdblock0
  31     1        512 mtdblock1
  31     2        256 mtdblock2
  31     3      10240 mtdblock3
  31     4      10240 mtdblock4
  31     5     112640 mtdblock5
  31     6      10240 mtdblock6
  31     7     112640 mtdblock7
  31     8       6144 mtdblock8
   8     0 5860522584 sda
   8     1    1998848 sda1
   8     2    1999872 sda2
   8     3 5856522240 sda3
   8    16 5860522584 sdb
   8    17    1998848 sdb1
   8    18    1999872 sdb2
   8    19 5856522240 sdb3
   8    32 5860522584 sdc
   8    33    1998848 sdc1
   8    34    1999872 sdc2
   8    35 5856522240 sdc3
   8    48 5860522584 sdd
   8    49    1998848 sdd1
   8    50    1999872 sdd2
   8    51 5856522240 sdd3
  31     9     102424 mtdblock9
   9     0    1997760 md0
   9     1    1998784 md1
  31    10       4464 mtdblock10

# mdadm --examine /dev/sd[abcd]3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x4
Array UUID : a8c78b8e:4ab1fe37:160fd540:b7a8cd63
Name : NAS540:2 (local to host NAS540)
Creation Time : Sat Dec 14 05:08:13 2019
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 11712782336 (5585.09 GiB 5996.94 GB)
Array Size : 11712781952 (11170.18 GiB 11993.89 GB)
Used Dev Size : 11712781952 (5585.09 GiB 5996.94 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 57579709:2a1053f9:4e0e390f:bcef6466
Reshape pos'n : 7602176000 (7250.00 GiB 7784.63 GB)
New Layout : left-symmetric
Update Time : Sat Dec 21 05:08:45 2019
Checksum : aeda0bb6 - correct
Events : 2801362
Layout : left-symmetric-6
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)

/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x4
Array UUID : a8c78b8e:4ab1fe37:160fd540:b7a8cd63
Name : NAS540:2 (local to host NAS540)
Creation Time : Sat Dec 14 05:08:13 2019
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 11712782336 (5585.09 GiB 5996.94 GB)
Array Size : 11712781952 (11170.18 GiB 11993.89 GB)
Used Dev Size : 11712781952 (5585.09 GiB 5996.94 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : 27e3c53c:450be072:4b3e069c:247f1661
Reshape pos'n : 7602176000 (7250.00 GiB 7784.63 GB)
New Layout : left-symmetric
Update Time : Sat Dec 21 05:08:45 2019
Checksum : e3145207 - correct
Events : 2801362
Layout : left-symmetric-6
Chunk Size : 64K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing)

/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x4
Array UUID : a8c78b8e:4ab1fe37:160fd540:b7a8cd63
Name : NAS540:2 (local to host NAS540)
Creation Time : Sat Dec 14 05:08:13 2019
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 11712782336 (5585.09 GiB 5996.94 GB)
Array Size : 11712781952 (11170.18 GiB 11993.89 GB)
Used Dev Size : 11712781952 (5585.09 GiB 5996.94 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : af400f15:def9e284:5c8da320:6b6f1d92
Reshape pos'n : 7602176000 (7250.00 GiB 7784.63 GB)
New Layout : left-symmetric
Update Time : Sat Dec 21 05:08:45 2019
Checksum : 8304dd81 - correct
Events : 2801362
Layout : left-symmetric-6
Chunk Size : 64K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing)

/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x6
Array UUID : a8c78b8e:4ab1fe37:160fd540:b7a8cd63
Name : NAS540:2 (local to host NAS540)
Creation Time : Sat Dec 14 05:08:13 2019
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 11712782336 (5585.09 GiB 5996.94 GB)
Array Size : 11712781952 (11170.18 GiB 11993.89 GB)
Used Dev Size : 11712781952 (5585.09 GiB 5996.94 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Recovery Offset : 7602176000 sectors
State : active
Device UUID : 4e2d4e04:9660ec5a:0a69bd17:d20c07ef
Reshape pos'n : 7602176000 (7250.00 GiB 7784.63 GB)
New Layout : left-symmetric
Update Time : Sat Dec 21 05:08:45 2019
Checksum : 6170a9f2 - correct
Events : 2801362
Layout : left-symmetric-6
Chunk Size : 64K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing)
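A worked estimate from the output above, assuming the 'Reshape pos'n' is read against the 'Array Size':

7250.00 GiB / 11170.18 GiB ≈ 0.65

so the interrupted reshape appears to have stalled at roughly 65 %, consistent with the rough "around 70 %" figure given later in the thread.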
What happens if you try to assemble it?

su
mdadm --assemble /dev/md2 /dev/sd[abcd]3
Thank you for your reply. @Mijzelf
I still have the problem after trying to assemble the array on my NAS540 via the command line.
# mdadm --assemble /dev/md2 /dev/sd[abcd]3
mdadm: Failed to restore critical section for reshape, sorry.
Possibly you needed to specify the --backup-file
What should I do next?
Do you know what would be the locations of the 'raid-file/backup' holding the configuration on the system?
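A hedged aside for anyone else hitting this error: "Failed to restore critical section for reshape" means mdadm is looking for the backup file written when the reshape started. If such a file can be found, it is passed with --backup-file; if it is genuinely gone, mdadm builds that support --invalid-backup can be asked to restart the reshape without it. The file path below is only a placeholder, not a real location on the NAS540.

mdadm --assemble --backup-file=/path/to/md2-reshape.backup /dev/md2 /dev/sd[abcd]3
mdadm --assemble --invalid-backup --backup-file=/path/to/md2-reshape.backup /dev/md2 /dev/sd[abcd]3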
what would be the locations of the 'raid-file/backup'
I don't think such a file exists. But if it does, it would be in /firmware/mnt/sysdisk/, /firmware/mnt/nand/ or /etc/zyxel/, as those are the only storage pools outside your data array. The latter two are in flash memory.
Your array was 'reshaping', and is stalled at around 70%. What exactly were you doing?
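A minimal way to look for such a backup file in the three locations named above (the '*backup*' name pattern is only a guess):

find /firmware/mnt/sysdisk /firmware/mnt/nand /etc/zyxel -type f -name '*backup*' 2>/dev/null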
Your array was 'reshaping', and is stalled at around 70%. What exactly were you doing?
Don't know. What else can I do?
Have you checked the locations I gave you?

You don't know what you were doing, so this problem came out of the blue? Is it possible, as far as you know, that one disk was dropped from the array and automatically added again? On RAID5 that would look different, but unfortunately I have no experience with RAID6 problems.
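One hedged way to check for exactly that from the superblocks already posted (the grep pattern is illustrative): a member that was dropped and re-added typically carries a 'Recovery Offset' and, if it fell behind, a lower 'Events' count than the others. In the output posted earlier, sdd3 is the only partition with a Recovery Offset, while all four report the same Events value.

mdadm --examine /dev/sd[abcd]3 | grep -E '^/dev/|Events|Update Time|Recovery Offset|Device Role'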