NAS 542 raid 5 volume down

BjoWis Posts: 33  Freshman Member
Hi,

The volume of my NAS 542 is down and I've tried to find answers here. I hope @Mijzelf can be so kind as to give me some guidance, since I am quite a newbie when it comes to Linux.

I have managed to get some info from both SSH and the WUI, shown below:

~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : inactive sdb3[1](S) sdc3[4](S) sdd3[2](S)
      8778405888 blocks super 1.2

md1 : active raid1 sdb2[6] sdc2[4] sdd2[2]
      1998784 blocks super 1.2 [4/3] [U_UU]

md0 : active raid1 sdc1[6] sdb1[5] sdd1[2]
      1997760 blocks super 1.2 [4/3] [_UUU]

unused devices: <none>

~ # mdadm --examine /dev/sd[abcd]3
mdadm: cannot open /dev/sda3: No such device or address
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 1c9c1be5:13bd011d:0efaf352:d48ed21d
           Name : NAS540:2
  Creation Time : Tue Apr 28 15:12:52 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
     Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 873120e2:49f83f72:d3287c92:a4584865

    Update Time : Sun Oct  2 00:41:14 2022
       Checksum : 41afc283 - correct
         Events : 716367

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : .AA. ('A' == active, '.' == missing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 1c9c1be5:13bd011d:0efaf352:d48ed21d
           Name : NAS540:2
  Creation Time : Tue Apr 28 15:12:52 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
     Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 2667904416 sectors
          State : clean
    Device UUID : 2793e957:799fc329:e7c09b5e:6c9d0fe2

    Update Time : Fri Sep 30 13:40:47 2022
       Checksum : 56e4b972 - correct
         Events : 716337

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 1c9c1be5:13bd011d:0efaf352:d48ed21d
           Name : NAS540:2
  Creation Time : Tue Apr 28 15:12:52 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
     Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 102055eb:136ffba1:cc73cc11:2c388508

    Update Time : Sun Oct  2 00:41:14 2022
       Checksum : 9d2d5257 - correct
         Events : 716367

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : .AA. ('A' == active, '.' == missing)

The RAID was created in my old NAS540 some years ago. I bought a NAS542 last week and moved the disks to that one. Once it had booted I got an error saying that the RAID was degraded. I tried to repair the RAID via the Storage Manager in the WUI, but when the syncing was done I couldn't log in, so I rebooted the NAS and now the error says "Volume Down" :(








All Replies

  • Mijzelf
    Mijzelf Posts: 2,790  Guru Member
    According to your screenshots disk1 is dead. According to the mdadm dump, partition sdc3 was dropped from the array after Fri Sep 30 13:40:47 2022, and at that time all 4 disks were still working, while the remaining two members were last updated at Sun Oct  2 00:41:14 2022. As an array which is down is no longer updated, I guess that was the moment disk1 died.
    So after sdc3 was dropped, the degraded array lived for around 36 hours. I hope you didn't do a lot of writing to the array in that time. The array can probably be restored, but roughly one third of the volume will reflect the state of last Friday, while the other two thirds reflect Saturday night. That can cause filesystem problems.
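    For reference, those timestamps and event counters can be pulled from all members in one go with something like:

    mdadm --examine /dev/sd[bcd]3 | grep -E 'Update Time|Events|Device Role|Array State'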

    Anyway, seeing the headers, you can recreate the array using:

    su
    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5  --raid-devices=4 --metadata=1.2 --chunk=64K  --layout=left-symmetric /dev/md2 missing /dev/sdb3 /dev/sdd3 /dev/sdc3 && e2fsck /dev/md2

    Those are 3 lines, the latter two starting with mdadm. The 2nd line can give an error, as it's not clear to me whether /dev/md2 is 'active' or not.
    The 3rd line creates a new, degraded array, and when that succeeds it does a filesystem check (&& e2fsck /dev/md2). I wrote it this way because the firmware can dive in and mount the filesystem shortly after the array is assembled, making the check difficult. If e2fsck complains that the filesystem is already mounted, this wasn't fast enough, and you'll have to do the check from the web interface.
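    If that happens, one option (just a sketch, nothing firmware-specific) is to see whether md2 got mounted, unmount it, and re-run the check:

    grep md2 /proc/mounts    # shows whether the firmware already mounted it
    umount /dev/md2          # may fail if shares are in use
    e2fsck /dev/md2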

    The box might start beeping, as there is a degraded array now.
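    If you want to double-check the result afterwards, this only reads the state and changes nothing:

    cat /proc/mdstat
    mdadm --detail /dev/md2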

  • BjoWis
    BjoWis Posts: 33  Freshman Member
    Ok, here's the result;

    login as: admin
    admin@192.168.1.157's password:


    BusyBox v1.19.4 (2022-08-11 15:13:21 CST) built-in shell (ash)
    Enter 'help' for a list of built-in commands.

    ~ $ su
    Password:


    BusyBox v1.19.4 (2022-08-11 15:13:21 CST) built-in shell (ash)
    Enter 'help' for a list of built-in commands.

    ~ # mdadm --stop /dev/md2
    mdadm: stopped /dev/md2
    ~ # mdadm --create --assume-clean --level=5  --raid-devices=4 --metadata=1.2 --chunk=64K  --layout=left-symmetric /dev/md2 missing /dev/sdb3 /dev/sdd3 /dev/sdc3 && e2fsck /dev/md2
    mdadm: /dev/sdb3 appears to be part of a raid array:
        level=raid5 devices=4 ctime=Tue Apr 28 15:12:52 2015
    mdadm: /dev/sdd3 appears to be part of a raid array:
        level=raid5 devices=4 ctime=Tue Apr 28 15:12:52 2015
    mdadm: /dev/sdc3 appears to be part of a raid array:
        level=raid5 devices=4 ctime=Tue Apr 28 15:12:52 2015
    Continue creating array? y
    mdadm: array /dev/md2 started.
    e2fsck 1.42.12 (29-Aug-2014)
    /dev/md2: recovering journal
    The filesystem size (according to the superblock) is 2194601472 blocks
    The physical size of the device is 2194601328 blocks
    Either the superblock or the partition table is likely to be corrupt!
    Abort<y>? no
    /dev/md2 contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Block bitmap differences:  -(1355815009--1355815012) -(1355815016--1355815018) -(1355815022--1355815024) -1355815026 -(1355815029--1355815030) -1355815032 -(1355815034--1355815037) -(1355815041--1355815044) -(1355815048--1355815050) -(1355815054--1355815056) -1355815058 -(1355815061--1355815062) -1355815064 -(1355815066--1355815069) -1355815203 -1355815970 -(1355815973--1355815974) -(1355815977--1355815978) -1355815980 -(1355815982--1355815983) -1355815986 -1355815988 -1355815990 -1355815992 -1355815994 -1355815996 -1355815999 -1355816002 -(1355816005--1355816006) -(1355816009--1355816010) -1355816012 -(1355816014--1355816015) -1355816018 -1355816020 -1355816022 -1355816024 -1355816026 -1355816028 -1355816031 -1355816978 -1355817026 -(1355817028--1355817030) -1355817032 -(1355817034--1355817035) -1355817037 -(1355817040--1355817043) -1355817045 -(1355817047--1355817048) -1355817051 -(1355817057--1355817060) -(1355817064--1355817066) -(1355817070--1355817072) -1355817074 -(1355817077--1355817078) -1355817080 -(1355817082--1355817085) -1355817090 -(1355817092--1355817094) -1355817096 -(1355817098--1355817099) -1355817101 -(1355817104--1355817107) -1355817109 -(1355817111--1355817112) -1355817115 -(1355817440--1355817441) -1355817444 -1355817446 -1355817448 -(1355817760--1355817761) -1355817765 -1355817768 -1355817771 -(1355817774--1355817777) -(1355817780--1355817783) -1355817786 -(1355817788--1355817789) -1355818018 -(1355818021--1355818022) -(1355818025--1355818026) -1355818028 -(1355818030--1355818031) -1355818034 -1355818036 -1355818038 -1355818040 -1355818042 -1355818044 -1355818047 -(1355818113--1355818116) -(1355818120--1355818122) -(1355818126--1355818128) -1355818130 -(1355818133--1355818134) -1355818136 -(1355818138--1355818141) -1355818146 -(1355818149--1355818150) -(1355818154--1355818155) -(1355818157--1355818158) -(1355818160--1355818162) -(1355818164--1355818165) -(1355818167--1355818169) -1355818171 -1355818173 -1355818175 -1355819026 -(1355819105--1355819108) -(1355819112--1355819114) -(1355819118--1355819120) -1355819122 -(1355819125--1355819126) -1355819128 -(1355819130--1355819133) -(1355819488--1355819489) -(1355819494--1355819495) -1355819808 -1355819813 -1355819816 -1355819819 -(1355819822--1355819825) -(1355819828--1355819831) -1355819834 -(1355819836--1355819837) -1355820066 -(1355820069--1355820070) -(1355820073--1355820074) -1355820076 -(1355820078--1355820079) -1355820082 -1355820084 -1355820086 -1355820088 -1355820090 -1355820092 -1355820095 -(1355820161--1355820164) -(1355820168--1355820170) -(1355820174--1355820176) -1355820178 -(1355820181--1355820182) -1355820184 -(1355820186--1355820189) -1355820194 -(1355820197--1355820198) -(1355820201--1355820202) -1355820204 -(1355820206--1355820207) -1355820210 -1355820212 -1355820214 -1355820216 -1355820218 -1355820220 -1355820223 -1355821074 -(1355821153--1355821156) -(1355821160--1355821162) -(1355821166--1355821168) -1355821170 -(1355821173--1355821174) -1355821176 -(1355821178--1355821181) -(1355821536--1355821539) -(1355821542--1355821543) -(1355821856--1355821858) -1355821861 -1355821864 -1355821867 -(1355821870--1355821873) -(1355821876--1355821879) -1355821882 -(1355821884--1355821885) -1355822114 -(1355822117--1355822118) -(1355822121--1355822122) -1355822124 -(1355822126--1355822127) -1355822130 -1355822132 -1355822134 -1355822136 -1355822138 -1355822140 -1355822143 -(1355822209--1355822212) -(1355822216--1355822218) -(1355822222--1355822224) -1355822226 -(1355822229--1355822230) -1355822232 
-(1355822234--1355822237) -1355822242 -(1355822245--1355822246) -(1355822249--1355822250) -1355822252 -(1355822254--1355822255) -1355822258 -1355822260 -1355822262 -1355822264 -1355822266 -1355822268 -1355822271 -1355823122 -(1355823201--1355823204) -(1355823208--1355823210) -(1355823214--1355823216) -1355823218 -(1355823221--1355823222) -1355823224 -(1355823226--1355823229) -(1355823233--1355823236) -(1355823240--1355823242) -(1355823246--1355823248) -1355823250 -(1355823253--1355823254) -1355823256 -(1355823258--1355823261) -1355823393 -(1355823395--1355823397) -1355824162 -(1355824165--1355824166) -1355824170 -1355824177 -1355824181 -1355824183 -1355824187 -(1355824189--1355824191) -1355824194 -(1355824197--1355824198) -1355824202 -1355824209 -1355824213 -1355824215 -1355824219 -(1355824221--1355824223) -1355825170 -(1355825249--1355825252) -(1355825256--1355825258) -(1355825262--1355825264) -1355825266 -(1355825269--1355825270) -1355825272 -(1355825274--1355825277) -(1355825634--1355825635) -(1355825637--1355825638) -(1355825640--1355825641) -1355825952 -1355825957 -1355825960 -1355825963 -(1355825966--1355825969) -(1355825972--1355825975) -1355825978 -(1355825980--1355825981) -1355826210 -(1355826213--1355826214) -(1355826217--1355826218) -1355826220 -(1355826222--1355826223) -1355826226 -1355826228 -1355826230 -1355826232 -1355826234 -1355826236 -1355826239 -(1355826305--1355826308) -(1355826312--1355826314) -(1355826318--1355826320) -1355826322 -(1355826325--1355826326) -1355826328 -(1355826330--1355826333) -1355826338 -(1355826341--1355826342) -(1355826345--1355826346) -1355826348 -(1355826350--1355826351) -1355826354 -1355826356 -1355826358 -1355826360 -1355826362 -1355826364 -1355826367 -1355827218 -(1355827297--1355827300) -(1355827304--1355827306) -(1355827310--1355827312) -1355827314 -(1355827317--1355827318) -1355827320 -(1355827322--1355827325) -(1355827680--1355827683) -(1355827685--1355827686) -(1355827688--1355827689) -1355828000 -(1355828002--1355828005) -1355828008 -1355828011 -(1355828014--1355828017) -(1355828020--1355828023) -1355828026 -(1355828028--1355828029) -1355828258 -(1355828261--1355828262) -1355828269 -1355828274 -(1355828276--1355828278) -1355828281 -(1355828283--1355828284) -1355828287 -(1355828353--1355828356) -(1355828360--1355828362) -(1355828366--1355828368) -1355828370 -(1355828373--1355828374) -1355828376 -(1355828378--1355828381) -1355828386 -(1355828389--1355828390) -1355828397 -1355828402 -(1355828404--1355828406) -1355828409 -(1355828411--1355828412) -1355828415 -1355829266 -(1355829345--1355829348) -(1355829352--1355829354) -(1355829358--1355829360) -1355829362 -(1355829365--1355829366) -1355829368 -(1355829370--1355829373) -(1355829728--1355829730) -1355829732 -1355829737 -1355830048 -1355830053 -1355830056 -1355830059 -(1355830062--1355830065) -(1355830068--1355830071) -1355830074 -(1355830076--1355830077) -1355830306 -(1355830309--1355830310) -1355830314 -(1355830316--1355830320) -(1355830324--1355830325) -(1355830327--1355830328) -(1355830330--1355830332) -1355830335 -(1355830401--1355830404) -(1355830408--1355830410) -(1355830414--1355830416) -1355830418 -(1355830421--1355830422) -1355830424 -(1355830426--1355830429) -1355830434 -(1355830437--1355830438) -1355830442 -(1355830444--1355830448) -(1355830452--1355830453) -(1355830455--1355830456) -(1355830458--1355830460) -1355830463 -1355831314 -(1355831393--1355831396) -(1355831400--1355831402) -(1355831406--1355831408) -1355831410 -(1355831413--1355831414) -1355831416 
-(1355831418--1355831421) -(1355831779--1355831780) -1355831785 -(1355832097--1355832098) -1355832101 -1355832104 -1355832107 -(1355832110--1355832113) -(1355832116--1355832119) -1355832122 -(1355832124--1355832125) -1355832354 -(1355832357--1355832358) -1355832361 -(1355832363--1355832367) -(1355832369--1355832370) -1355832372 -(1355832374--1355832375) -1355832378 -(1355832381--1355832383) -(1355832449--1355832452) -(1355832456--1355832458) -(1355832462--1355832464) -1355832466 -(1355832469--1355832470) -1355832472 -(1355832474--1355832477) -1355832482 -(1355832485--1355832486) -1355832489 -(1355832491--1355832495) -(1355832497--1355832498) -1355832500 -(1355832502--1355832503) -1355832506 -(1355832509--1355832511) -1355833362 -(1355833441--1355833444) -(1355833448--1355833450) -(1355833454--1355833456) -1355833458 -(1355833461--1355833462) -1355833464 -(1355833466--1355833469) -1355833824 -1355833826 -(1355833831--1355833832) -1355833834 -(1355834145--1355834147) -1355834149 -1355834152 -1355834155 -(1355834158--1355834161) -(1355834164--1355834167) -1355834170 -(1355834172--1355834173) -1355834402 -(1355834405--1355834406) -(1355834411--1355834413) -1355834415 -1355834417 -1355834421 -(1355834426--1355834427) -(1355834429--1355834431) -(1355834497--1355834500) -(1355834504--1355834506) -(1355834510--1355834512) -1355834514 -(1355834517--1355834518) -1355834520 -(1355834522--1355834525) -1355834530 -(1355834533--1355834534) -1355834537 -(1355834541--1355834542) -1355834550 -(1355834553--1355834555) -(1355834557--1355834559) -1355835410 -(1355835489--1355835492) -(1355835496--1355835498) -(1355835502--1355835504) -1355835506 -(1355835509--1355835510) -1355835512 -(1355835514--1355835517) -(1355835873--1355835876) -1355835878 -(1355835880--1355835881) -1355836194 -1355836196 -1355836200 -1355836203 -(1355836206--1355836209) -(1355836212--1355836215) -1355836218 -(1355836220--1355836221) -1355836450 -(1355836453--1355836454) -(1355836458--1355836460) -1355836463 -(1355836466--1355836468) -(1355836470--1355836471) -(1355836473--1355836479) -(1355836545--1355836548) -(1355836552--1355836554) -(1355836558--1355836560) -1355836562 -(1355836565--1355836566) -1355836568 -(1355836570--1355836573) -1355836578 -(1355836581--1355836582) -(1355836586--1355836588) -1355836591 -(1355836594--1355836596) -(1355836598--1355836599) -(1355836601--1355836607) -1355837458 -(1355837537--1355837540) -(1355837544--1355837546) -(1355837550--1355837552) -1355837554 -(1355837557--1355837558) -1355837560 -(1355837562--1355837565) -1355837921 -1355837923 -1355837928 -1355838240 -1355838243 -1355838245 -1355838248 -1355838251 -(1355838254--1355838257) -(1355838260--1355838263) -1355838266 -(1355838268--1355838269) -1355838498 -(1355838501--1355838502) -(1355838506--1355838507) -(1355838509--1355838515) -(1355838517--1355838521) -(1355838525--1355838527) -(1355838593--1355838596) -(1355838600--1355838602) -(1355838606--1355838608) -1355838610 -(1355838613--1355838614) -1355838616 -(1355838618--1355838621) -1355838626 -(1355838629--1355838630) -(1355838633--1355838634) -1355838638 -1355838641 -1355838643 -1355838646 -1355838648 -1355838650 -(1355838653--1355838655) -1355839506 -(1355839585--1355839588) -(1355839592--1355839594) -(1355839598--1355839600) -1355839602 -(1355839605--1355839606) -1355839608 -(1355839610--1355839613) -1355839970 -1355839972 -(1355839974--1355839975) -(1355840288--1355840289) -(1355840291--1355840293) -1355840296 -1355840299 -(1355840302--1355840305) -(1355840308--1355840311) -1355840314 
-(1355840316--1355840317) -1355840546 -(1355840549--1355840550) -1355840554 -1355840556 -1355840558 -1355840560 -(1355840564--1355840566) -1355840569 -(1355840571--1355840572) -1355840575 -(1355840641--1355840644) -(1355840648--1355840650) -(1355840654--1355840656) -1355840658 -(1355840661--1355840662) -1355840664 -(1355840666--1355840669) -1355840674 -(1355840677--1355840678) -(1355840681--1355840683) -1355840685 -(1355840688--1355840689) -1355840692 -(1355840694--1355840700) -1355840703 -1355843083 -(1355843136--1355843137) -(1355843140--1355843141) -1355843150 -(1355843152--1355843155) -1355843157 -1355844906 -1355844908 -1355844963 -(1355844972--1355844975) +(1355845648--1355845655) +(1355845688--1355845693) +(1355845720--1355845723)
    Fix<y>? yes
    Free blocks count wrong (1098096372, counted=1098094509).
    Fix<y>? yes
    Free inodes count wrong (274230172, counted=274228426).
    Fix<y>? yes

    /dev/md2: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/md2: 97078/274325504 files (3.5% non-contiguous), 1096506963/2194601472 blocks
    ~ #

    and when I run the command
    mdadm --examine /dev/sd[abcd]3

    ~ # mdadm --examine /dev/sd[abcd]3
    mdadm: cannot open /dev/sda3: No such device or address
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : f7c083fd:fe37f383:55424937:52ec4bd2

        Update Time : Wed Oct  5 15:14:25 2022
           Checksum : 4758829c - correct
             Events : 16

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 3e046eeb:3bac7e38:2c4e2408:d05d7a1d

        Update Time : Wed Oct  5 15:14:25 2022
           Checksum : 5d7b776 - correct
             Events : 16

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 803b4d20:57570076:2d7c62d1:e4bf567a

        Update Time : Wed Oct  5 15:14:25 2022
           Checksum : 9e5329e8 - correct
             Events : 16

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AAA ('A' == active, '.' == missing)
    ~ #
    What is the next step? In the WUI I see a crashed Volume :anguished:  Will it still be possible to recover the data?



  • Mijzelf
    Mijzelf Posts: 2,790  Guru Member
    According to e2fsck the filesystem is clean, and it contains 97078 files occupying about 4TB. So maybe the firmware tried to mount the volume while e2fsck had it locked, and it now thinks the volume is unmountable. Just reboot the box.
  • BjoWis
    BjoWis Posts: 33  Freshman Member
    Hmm, after rebooting the box I still can't reach any data. When I log on to the WUI I'm greeted by this splash screen:


    and when I open the Storage Manager;




    BusyBox v1.19.4 (2022-08-11 15:13:21 CST) built-in shell (ash)
    Enter 'help' for a list of built-in commands.

    ~ $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid5 sdb3[1] sdc3[3] sdd3[2]
          8778405312 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
          
    md1 : active raid1 sdb2[6] sdc2[4] sdd2[2]
          1998784 blocks super 1.2 [4/3] [U_UU]
          
    md0 : active raid1 sdc1[6] sdb1[5] sdd1[2]
          1997760 blocks super 1.2 [4/3] [_UUU]
          
    unused devices: <none>
    ~ $ su
    Password:


    BusyBox v1.19.4 (2022-08-11 15:13:21 CST) built-in shell (ash)
    Enter 'help' for a list of built-in commands.

    ~ # mdadm --examine /dev/sd[abcd]3
    mdadm: cannot open /dev/sda3: No such device or address
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : f7c083fd:fe37f383:55424937:52ec4bd2

        Update Time : Thu Oct  6 06:47:08 2022
           Checksum : 47595d3b - correct
             Events : 20

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 3e046eeb:3bac7e38:2c4e2408:d05d7a1d

        Update Time : Thu Oct  6 06:47:08 2022
           Checksum : 5d89215 - correct
             Events : 20

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 803b4d20:57570076:2d7c62d1:e4bf567a

        Update Time : Thu Oct  6 06:47:08 2022
           Checksum : 9e540487 - correct
             Events : 20

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AAA ('A' == active, '.' == missing)
    ~ #


    What's possible from here?


  • Mijzelf
    Mijzelf Posts: 2,790  Guru Member
    Can you mount the filesystem manually?

    su
    mkdir -p /tmp/mountpoint
    mount /dev/md2 /tmp/mountpoint

    And if that fails, what are the last lines of 'dmesg'? Look at timestamps to see where the relevant part ends.
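    If the normal mount fails, it can also be worth trying a read-only mount first, since that avoids any further writes to a possibly damaged filesystem (a sketch using the same mountpoint):

    mount -o ro /dev/md2 /tmp/mountpoint
    dmesg | tail -n 30    # the kernel's reason if the mount is refused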
  • BjoWis
    BjoWis Posts: 33  Freshman Member
    Tried that now, but it failed.

    ~ # mkdir -p /tmp/mountpoint
    ~ # mount /dev/md2 /tmp/mountpoint
    mount: wrong fs type, bad option, bad superblock on /dev/md2,
           missing codepage or helper program, or other error

           In some cases useful info is found in syslog - try
           dmesg | tail or so.

    The last line of 'dmesg':
    [  419.860163] EXT4-fs (md2): bad geometry: block count 2194601472 exceeds size of device (2194601328 blocks)
    The entire dmesg log is attached.


  • Mijzelf
    Mijzelf Posts: 2,790  Guru Member
    Ah right. For some reason the new array is slightly smaller than the old one was. Don't know why.

      Creation Time : Tue Apr 28 15:12:52 2015
         Array Size : 8778405888 (8371.74 GiB 8989.09 GB)

      Creation Time : Wed Oct  5 13:17:25 2022
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)     

    Yet the filesystem size didn't change, which causes this error. So try

    resize2fs /dev/md2

    followed by a mount. If that doesn't work, we'll have to force the array to the original size. (Don't know how, yet.)
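    If it comes to that, one untested possibility (just a sketch; mdadm may refuse if it wants to reserve space at the end of the members) is to re-create the array once more with the per-device size forced back to the old value. The old array size of 8778405888 KiB over 3 data disks is 2926135296 KiB per member:

    mdadm --stop /dev/md2
    # --size is the per-device size in KiB; keep the member order identical to before
    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 \
          --chunk=64K --layout=left-symmetric --size=2926135296 \
          /dev/md2 missing /dev/sdb3 /dev/sdd3 /dev/sdc3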

  • BjoWis
    BjoWis Posts: 33  Freshman Member
    Ok, here's the result
    ~ # resize2fs /dev/md2
    resize2fs 1.42.12 (29-Aug-2014)
    The filesystem can be resize to 2194601328 blocks.chk_expansible=0
    Resizing the filesystem on /dev/md2 to 2194601328 (4k) blocks.
    and around 35 minutes later...
    resize2fs: Can't read a block bitmap while trying to resize /dev/md2
    Please run 'e2fsck -fy /dev/md2' to fix the filesystem after the aborted resize operation.
    Tried that...
    ~ # e2fsck -fy /dev/md2
    e2fsck 1.42.12 (29-Aug-2014)
    e2fsck: Attempt to read block from filesystem resulted in short read while trying to open /dev/md2
    Could this be a zero-length partition?
    I've rebooted the box.


    This is starting to worry me, because I found this when checking the disks with S.M.A.R.T.



    So if disk 1 has crashed and disk 3 is "Bad", things might be a bit trickier?
    I've attached my last log from PuTTY as well.
  • Mijzelf
    Mijzelf Posts: 2,790  Guru Member
    Ouch! 52552 reallocated sectors! That is bad.
    So if disk 1 has crashed and disk 3 is "Bad", things might be a bit trickier?
    Definitely. You can try to make a bit-by-bit copy of disk 3 to a new disk using ddrescue. That copy will still contain the soft errors, but at least e2fsck can repair them without killing the disk further (and without having to deal with new errors arising while it works). But of course it is possible that the disk dies while being copied.
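    A minimal ddrescue sketch (the device names are assumptions, check them with 'cat /proc/partitions' first; ddrescue is not part of the stock busybox tools, so it has to be installed separately or the copy done on a PC with both disks attached):

    # assuming the failing disk 3 is /dev/sdc and the new, empty disk is /dev/sde;
    # keep the map file on persistent storage (e.g. a USB stick), not in /tmp
    ddrescue -f -n /dev/sdc /dev/sde /mnt/usb/rescue.map     # first pass, skip bad areas
    ddrescue -f -r3 /dev/sdc /dev/sde /mnt/usb/rescue.map    # retry the bad areas 3 times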
    You can also check whether disk 1 looks as bad when placed in another slot. If only 2 disks are left, the data is lost.
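    To judge that, a SMART dump is the quickest check (a sketch, assuming smartctl is available somewhere, e.g. on a PC the disk is attached to, and that the disk is detected at all):

    # device name is an example; adjust to wherever the disk shows up
    smartctl -H -A /dev/sda    # overall health plus attributes such as Reallocated_Sector_Ct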

  • BjoWis
    BjoWis Posts: 33  Freshman Member
    That was pretty bad news, as I thought... well, I have now ordered a new 3TB disk from Amazon; it will arrive tonight.

    Can I run ddrescue with the disks mounted in the NAS? (Replacing disk 1 with the new one and then run ddrescue from SSH?)

    I've also tried moving disk 1 to another slot, but it still only shows up as 3 GB, so I guess ddrescue-ing that one is not an option?

