NAS540 Volume down ...need help
tron7766
hi to all,
I need help. I have a NAS540 with a RAID5 of 4x3 TByte. After one HDD crashed and started reporting errors, the NAS degraded the RAID5 volume. But then I pulled the wrong HDD, put it back, and replaced the faulty HDD with a new one.
Then I restarted the NAS and it shows me that the volume is down?
Phew, no access to my data. What are the next steps?
Here is the mdstat output:
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : inactive sdd3[4](S) sda3[5](S)
5852270592 blocks super 1.2
md1 : active raid1 sdd2[6] sda2[7] sdc2[4]
1998784 blocks super 1.2 [4/3] [UU_U]
md0 : active raid1 sdd1[6] sda1[5] sdc1[4]
1997760 blocks super 1.2 [4/3] [UUU_]
#NAS_Jun_2019

Comments
But then I pulled the wrong HDD, put it back, and replaced the faulty HDD with a new one.
If you powered it up with only 2 disks, the degraded status is stored in the array headers. That will not automagically be repaired. AFAIK the only way to get the array running again is to re-create it.
Can you post the output of
su

mdadm --examine /dev/sd[abcd]3
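For readers following along: the lines that matter in that output are "Device Role", "Events" and "Array State", since they tell you which slot each partition occupied and whether the members still agree. A quick way to pull out just those fields (a sketch; the busybox grep on the NAS is assumed to support -E):

su
# show only the fields needed to reconstruct the member order
mdadm --examine /dev/sd[abcd]3 | grep -E '^/dev/|Device Role|Events|Array State'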
This one...
mdadm --examine /dev/sd[abcd]3

/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 07fd666b:a8e9b67a:b4149a5d:eeb3255e
Name : nas540:2
Creation Time : Tue Oct 6 18:56:14 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 270b9ca6:2c9e3e71:534c23e2:7a7b3a5b
Update Time : Sun Jun 9 22:10:51 2019
Checksum : 8ac939ee - correct
Events : 19157
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AA.A ('A' == active, '.' == missing)

mdadm: cannot open /dev/sdb3: No such device or address

/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 07fd666b:a8e9b67a:b4149a5d:eeb3255e
Name : nas540:2
Creation Time : Tue Oct 6 18:56:14 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
Data Offset : 196608 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 7a1f2349:3361d24d:807f7d6a:90491796
Update Time : Sun Jun 9 22:12:53 2019
Checksum : cd151306 - correct
Events : 19157
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AA.. ('A' == active, '.' == missing)

/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 07fd666b:a8e9b67a:b4149a5d:eeb3255e
Name : nas540:2
Creation Time : Tue Oct 6 18:56:14 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 612e6ba5:3be4635c:18c0cdf2:f1004f3a
Update Time : Sun Jun 9 22:12:53 2019
Checksum : 647c9cec - correct
Events : 19157
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AA.A ('A' == active, '.' == missing)
What's up with sdb2 and sdb3?
cat /proc/partitions
major minor #blocks name
7 0 147456 loop0
31 0 256 mtdblock0
31 1 512 mtdblock1
31 2 256 mtdblock2
31 3 10240 mtdblock3
31 4 10240 mtdblock4
31 5 112640 mtdblock5
31 6 10240 mtdblock6
31 7 112640 mtdblock7
31 8 6144 mtdblock8
8 0 2930266584 sda
8 1 1998848 sda1
8 2 1999872 sda2
8 3 2926266368 sda3
8 16 2930266584 sdb
8 17 2930265088 sdb1
8 32 2930233816 sdc
8 33 1998848 sdc1
8 34 1999872 sdc2
8 35 2926233600 sdc3
8 48 2930266584 sdd
8 49 1998848 sdd1
8 50 1999872 sdd2
8 51 2926266368 sdd3
31 9 102424 mtdblock9
9 0 1997760 md0
9 1 1998784 md1
31 10 4464 mtdblock10
And when I run it on the first partitions...
mdadm --examine /dev/sd[abcd]1
mdadm: No md superblock detected on /dev/sdb1.

/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4475e358:25154506:7edcedde:c56c1f56
Name : nas540:0
Creation Time : Tue Oct 6 18:56:12 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 3995648 (1951.33 MiB 2045.77 MB)
Array Size : 1997760 (1951.27 MiB 2045.71 MB)
Used Dev Size : 3995520 (1951.27 MiB 2045.71 MB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 6401fac5:89f8495d:d231f77b:797c06c9
Update Time : Mon Jun 10 09:18:07 2019
Checksum : be9e7515 - correct
Events : 828
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing)

/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4475e358:25154506:7edcedde:c56c1f56
Name : nas540:0
Creation Time : Tue Oct 6 18:56:12 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 3995648 (1951.33 MiB 2045.77 MB)
Array Size : 1997760 (1951.27 MiB 2045.71 MB)
Used Dev Size : 3995520 (1951.27 MiB 2045.71 MB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d440aebc:81d1648b:f065fdad:e32fbe1a
Update Time : Mon Jun 10 09:18:07 2019
Checksum : 672b7504 - correct
Events : 828
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing)

/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4475e358:25154506:7edcedde:c56c1f56
Name : nas540:0
Creation Time : Tue Oct 6 18:56:12 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 3995648 (1951.33 MiB 2045.77 MB)
Array Size : 1997760 (1951.27 MiB 2045.71 MB)
Used Dev Size : 3995520 (1951.27 MiB 2045.71 MB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 28d0324e:0dbc342d:02eb5d75:968bda21
Update Time : Mon Jun 10 09:18:07 2019
Checksum : 68fccfaa - correct
Events : 828
Device Role : Active device 0
Array State : AAA. ('A' == active, '.' == missing)
Don't know how you managed to get that. sda3 and sdd3 agree they are part of a degraded array (Array State : AA.A ('A' == active, '.' == missing)), but sdc3 thinks it's down (Array State : AA.. ('A' == active, '.' == missing)). I can't think of a scenario that leads to that.
Anyway, seeing the roles of the different partitions, the command to re-create the array is

mdadm --stop /dev/md2
mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sdc3 missing /dev/sda3
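As a precaution, one could verify the re-created array and do a read-only filesystem check before mounting anything (a sketch, not part of the advice above; e2fsck -n only reads and changes nothing on disk):

mdadm --detail /dev/md2   # should report raid5, 4 raid devices, 3 active, state clean/degraded
e2fsck -n /dev/md2        # read-only check of the ext4 filesystem on md2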
What's up with sdb2 and sdb3?
sdb is your new disk, and it appears to have a single, disk-spanning partition. That will be changed if you add it to the array.
Hi, I had some time for the NAS again...
Good news: md2 is active.
Bad news: the volume is down in the GUI and the beeper is on.

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[0] sdd3[3] sdc3[2]
8778307008 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
md1 : active raid1 sdd2[6] sda2[7] sdc2[4]
1998784 blocks super 1.2 [4/3] [UU_U]
md0 : active raid1 sdd1[6] sda1[5] sdc1[4]
1997760 blocks super 1.2 [4/3] [UUU_]
The array doesn't want to rebuild...

mdadm --examine /dev/sd[abcd]3

/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 0d8a619a:97ce0384:ee3a5475:2aa65d1c
Name : NAS540:2 (local to host NAS540)
Creation Time : Wed Jun 12 15:37:28 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778307008 (8371.65 GiB 8988.99 GB)
Used Dev Size : 5852204672 (2790.55 GiB 2996.33 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d74878a0:25ea4764:cb75619f:72746b15
Update Time : Wed Jun 12 16:41:22 2019
Checksum : edca6223 - correct
Events : 6
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : A.AA ('A' == active, '.' == missing)

mdadm: cannot open /dev/sdb3: No such device or address

/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 0d8a619a:97ce0384:ee3a5475:2aa65d1c
Name : NAS540:2 (local to host NAS540)
Creation Time : Wed Jun 12 15:37:28 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852205056 (2790.55 GiB 2996.33 GB)
Array Size : 8778307008 (8371.65 GiB 8988.99 GB)
Used Dev Size : 5852204672 (2790.55 GiB 2996.33 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 1f2d5346:85f27cb9:cbe98656:69fb864d
Update Time : Wed Jun 12 16:41:22 2019
Checksum : d81a49c4 - correct
Events : 6
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : A.AA ('A' == active, '.' == missing)

/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 0d8a619a:97ce0384:ee3a5475:2aa65d1c
Name : NAS540:2 (local to host NAS540)
Creation Time : Wed Jun 12 15:37:28 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
Array Size : 8778307008 (8371.65 GiB 8988.99 GB)
Used Dev Size : 5852204672 (2790.55 GiB 2996.33 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b24fd27c:124bef01:014c7c14:d3878b60
Update Time : Wed Jun 12 16:41:22 2019
Checksum : 2806b385 - correct
Events : 6
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : A.AA ('A' == active, '.' == missing)

mdadm --deatail /dev/md2
mdadm: unrecognized option '--deatail'
Usage: mdadm --help
for help

~ # mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Wed Jun 12 15:37:28 2019
Raid Level : raid5
Array Size : 8778307008 (8371.65 GiB 8988.99 GB)
Used Dev Size : 2926102336 (2790.55 GiB 2996.33 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Wed Jun 12 16:41:22 2019
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : NAS540:2 (local to host NAS540)
UUID : 0d8a619a:97ce0384:ee3a5475:2aa65d1c
Events : 6
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 0 0 1 removed
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
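A note on the missing rebuild: md only rebuilds onto an available member, and at this point md2 has just three devices and no sdb3 at all, so there is nothing to rebuild onto. Once the new disk carries the same partition layout as the others, adding its third partition should start the recovery (a sketch, assuming /dev/sdb3 exists by then):

mdadm /dev/md2 --add /dev/sdb3   # attach the new partition; md then rebuilds onto it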
Then it goes on...

mdadm --stop /dev/md2
mdadm: stopped /dev/md2

mdadm --assemble --force /dev/md2 /dev/sd[abcd]3
mdadm: cannot open device /dev/sdb3: No such device or address
mdadm: /dev/sdb3 has no superblock - assembly aborted
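Since sdb3 simply doesn't exist yet, one way around the aborted assembly would presumably be to name only the members that are actually there (a sketch, not taken from the thread):

mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdc3 /dev/sdd3   # assemble the array degraded from its three existing members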
I don't know what is going on with sdb3...

sha2_512 sha256_hmac null
[ 31.108529] egiga0: no IPv6 routers present
[ 39.747678] ADDRCONF(NETDEV_CHANGE): egiga1: link becomes ready
[ 42.358836] md: md2 stopped.
[ 42.410376] md: bind<sdc3>
[ 42.413416] md: bind<sdd3>
[ 42.416516] md: bind<sda3>
[ 42.423346] md/raid:md2: device sda3 operational as raid disk 0
[ 42.429309] md/raid:md2: device sdd3 operational as raid disk 3
[ 42.435245] md/raid:md2: device sdc3 operational as raid disk 2
[ 42.442242] md/raid:md2: allocated 4220kB
[ 42.446350] md/raid:md2: raid level 5 active with 3 out of 4 devices, algorithm 2
[ 42.453872] RAID conf printout:
[ 42.453878] --- level:5 rd:4 wd:3
[ 42.453885] disk 0, o:1, dev:sda3
[ 42.453891] disk 2, o:1, dev:sdc3
[ 42.453897] disk 3, o:1, dev:sdd3
[ 42.453999] md2: detected capacity change from 0 to 8988986376192
[ 42.681557] md2: unknown partition table
[ 43.229025] EXT4-fs (md2): Couldn't mount because of unsupported optional features (4000000)
[ 70.551156] bz time = 1f
[ 70.553704] bz status = 3
[ 70.556329] bz_timer_status = 0
[ 70.559513] start buzzer
[ 73.663087] bz time = 1
[ 73.666651] bz status = 1
[ 73.670355] bz_timer_status = 1
[ 74.804812] bz time = 0
[ 74.807271] bz status = 0
[ 74.809914] bz_timer_status = 1
[ 161.742082] md2: detected capacity change from 8988986376192 to 0
[ 161.748204] md: md2 stopped.
[ 161.751121] md: unbind<sda3>
[ 161.788549] md: export_rdev(sda3)
[ 161.791898] md: unbind<sdd3>
[ 161.828544] md: export_rdev(sdd3)
[ 161.831892] md: unbind<sdc3>
[ 161.868579] md: export_rdev(sdc3)
[ 314.814732] md: bind<sda3>
[ 314.817667] md: bind<sdc3>
[ 314.820618] md: bind<sdd3>
[ 314.827481] md/raid:md2: device sdd3 operational as raid disk 3
[ 314.833468] md/raid:md2: device sdc3 operational as raid disk 2
[ 314.839417] md/raid:md2: device sda3 operational as raid disk 0
[ 314.846385] md/raid:md2: allocated 4220kB
[ 314.850496] md/raid:md2: raid level 5 active with 3 out of 4 devices, algorithm 2
[ 314.857998] RAID conf printout:
[ 314.858003] --- level:5 rd:4 wd:3
[ 314.858010] disk 0, o:1, dev:sda3
[ 314.858017] disk 2, o:1, dev:sdc3
[ 314.858023] disk 3, o:1, dev:sdd3
[ 314.858119] md2: detected capacity change from 0 to 8988986376192
[ 314.865839] md2: unknown partition table
[ 975.865301] md2: detected capacity change from 8988986376192 to 0
[ 975.871490] md: md2 stopped.
[ 975.874401] md: unbind<sdd3>
[ 975.908638] md: export_rdev(sdd3)
[ 975.911994] md: unbind<sdc3>
[ 975.968580] md: export_rdev(sdc3)
[ 975.971938] md: unbind<sda3>
[ 975.988582] md: export_rdev(sda3)

~ #
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdd2[6] sda2[7] sdc2[4]
1998784 blocks super 1.2 [4/3] [UU_U]
md0 : active raid1 sdd1[6] sda1[5] sdc1[4]
1997760 blocks super 1.2 [4/3] [UUU_]
unused devices: <none>
June 10:
/dev/sda1:
<snip>
Device Role : Active device 2
/dev/sdc1:
Device Role : Active device 1
/dev/sdd1:
Device Role : Active device 0
June 12:
/dev/sda3:
Device Role : Active device 0
/dev/sdc3:
Device Role : Active device 2
/dev/sdd3:
Device Role : Active device 3

You didn't use the create command exactly as I specified, or you re-shuffled the disks after creation. Based on your dump of June 10 I specified '/dev/sdd3 /dev/sdc3 missing /dev/sda3', but instead it seems '/dev/sda3 missing /dev/sdc3 /dev/sdd3' was used. BTW, now I see I made a mistake: it should be '/dev/sdd3 /dev/sdc3 /dev/sda3 missing'.
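Putting that correction together with the create command given earlier, the full re-create would presumably read:

mdadm --stop /dev/md2
mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sdc3 /dev/sda3 missing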
News from the NAS. With
mdadm --create....
the NAS didn't start a recovery, and the partitions sdb1..3 were lost.
So I used dd to copy the partition table from sda to sdb, then manually added the sdb1..3 partitions to md0..md2.
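Roughly, those two steps would look like this (a sketch reconstructed from the description above, not a verbatim transcript; note that a dd of the start of the disk copies only the protective MBR and the primary GPT, so the backup table at the end of sdb stays stale, which matches the fdisk complaint further down):

# copy the protective MBR plus the primary GPT (first 34 sectors) from sda to sdb
dd if=/dev/sda of=/dev/sdb bs=512 count=34
# once the kernel has re-read sdb's partition table (e.g. after a reboot),
# add the new partitions back into the three arrays
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
mdadm /dev/md2 --add /dev/sdb3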
The NAS has begun to recover, and the GUI shows it too. But the info is still "volume down", and I can't see any data.
dmesg shows...

[ 41.983404] md: recovery of RAID array md2
[ 41.987553] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 41.993403] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 42.003046] md: using 128k window, over a total of 2926102336k.
[ 42.009024] md: resuming recovery of md2 from checkpoint.
[ 42.433866] md2: unknown partition table
[ 42.953680] EXT4-fs (md2): bad geometry: block count 2194601472 exceeds size of device (2194576752 blocks)
[ 95.978898] bz time = 1
[ 95.981358] bz status = 1
[ 95.983985] bz_timer_status = 0
[ 95.987193] start buzzer
[ 165.300990] bz time = 0
[ 165.303450] bz status = 0
[ 165.306077] bz_timer_status = 1
[ 580.050138] UBI error: ubi_open_volume: cannot open device 5, volume 0, error -16
[ 580.058887] UBI error: ubi_open_volume: cannot open device 5, volume 0, error -16
[ 580.095206] UBI error: ubi_open_volume: cannot open device 3, volume 0, error -16
[ 580.121457] UBI error: ubi_open_volume: cannot open device 3, volume 0, error -16
[ 605.555375] UBI error: ubi_open_volume: cannot open device 5, volume 0, error -16
[ 605.563095] UBI error: ubi_open_volume: cannot open device 5, volume 0, error -16
[ 605.572758] UBI error: ubi_open_volume: cannot open device 3, volume 0, error -16
[ 605.580340] UBI error: ubi_open_volume: cannot open device 3, volume 0, error
fdisk says the backup partition table on sdb is bad and that it is working with the primary table...
Any chance to get the data back?
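About that "bad geometry" line: the numbers say the ext4 filesystem expects 2194601472 4K blocks while the re-created md2 only provides 2194576752, i.e. the new array is roughly 96 MiB smaller than the original. The most likely reason is that the re-create gave sdc3 a data offset of 262144 sectors where the original array used 196608, so the smallest member shrank and the whole array with it. A way to see the mismatch directly (a sketch; dumpe2fs and blockdev are assumed to be present in the firmware's shell):

# filesystem size according to the ext4 superblock (in 4K blocks)
dumpe2fs -h /dev/md2 | grep -i 'block count'
# current size of md2 in 512-byte sectors (divide by 8 to compare in 4K blocks)
blockdev --getsz /dev/md2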
md2 bad geometry....
With
e2fsck -f /dev/XXX
resize2fs /dev/XXX
it comes out right...
But where is the data?
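For reference, applied here those two commands would presumably be run against md2, the re-created data array (an assumption; the post above doesn't name the device):

e2fsck -f /dev/md2     # force a full check of the ext4 filesystem on md2
resize2fs /dev/md2     # with no size given, resize the filesystem to the current size of md2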