Got a problem with NAS 542 raid 5 volume down
Axberg
Posts: 14 Freshman Member
Hi!
How can I retrieve my data from this NAS?
I'm a total newbie, so please bear with me.
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdb3[1] sdc3[4] sdd3[3]
11708660736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
md1 : active raid1 sdc2[4] sdd2[5] sdb2[1]
1998784 blocks super 1.2 [4/3] [_UUU]
md0 : active raid1 sdb1[6] sdd1[5] sdc1[4]
1997760 blocks super 1.2 [4/3] [U_UU]
~ # mdadm --examine /dev/sd[abcd]3
mdadm: cannot open /dev/sda3: No such device or address
/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
Name : NAS542:2 (local to host NAS542)
Creation Time : Mon Feb 19 16:18:55 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : ad1bbcba:851fe7ce:9b4e0515:15f8029b
Update Time : Tue Jul 28 17:02:21 2020
Checksum : ec098281 - correct
Events : 18103
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : .AAA ('A' == active, '.' == missing)
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
Name : NAS542:2 (local to host NAS542)
Creation Time : Mon Feb 19 16:18:55 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 0182f1cb:4bcb13a7:0ccde221:ae77558d
Update Time : Tue Jul 28 17:02:21 2020
Checksum : d49b92a8 - correct
Events : 18103
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : .AAA ('A' == active, '.' == missing)
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
Name : NAS542:2 (local to host NAS542)
Creation Time : Mon Feb 19 16:18:55 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 6834c489:800c6cb4:67c317fc:4e3d686f
Update Time : Tue Jul 28 17:02:21 2020
Checksum : 5c0e42bf - correct
Events : 18103
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
#NAS_August_2020
All Replies
-
According to this data your raid array is running, yet degraded. What is the output of
cat /proc/mounts
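A quick way to check the same thing, assuming md2 is the data array as in the /proc/mdstat output above (the grep one-liner is just a convenience, not a NAS-specific command):
grep md2 /proc/mounts || echo "md2 is not mounted"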
-
Hi Mijzelf
~ # cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
/dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
What to do now?
-
/dev/md2 should have been mounted, but it isn't. What if you try to mount it manually?
su
mkdir /mnt/mountpoint
mount /dev/md2 /mnt/mountpoint
dmesg | tail
It is also possible that the data partition is not directly on /dev/md2, but in a Logical Volume. To see that, use
cat /proc/partitions
lvscan && lvdisplay --all
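A minimal sketch of that mount step; the -p and -o ro flags are additions here, as a precaution so nothing is written to a possibly damaged filesystem:
su
mkdir -p /mnt/mountpoint                # -p: no error if the directory already exists
mount -o ro /dev/md2 /mnt/mountpoint    # try read-only first, as a precaution
dmesg | tail                            # kernel messages show why a mount failed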
-
Hi again
As I said, I'm a total newbie to Linux, so any help is most welcome.
~ # mkdir /mnt/mountpoint
mkdir: can't create directory '/mnt/mountpoint': File exists
~ # mount /dev/md2 /mnt/mountpoint
mount: unknown filesystem type 'LVM2_member'
~ # dmesg | tail
[ 2340.614356][ 2340.614359] ****** disk(1:0:0:0) spin down at 204061 ******
[ 2341.430716][ 2341.430719] ****** disk(3:0:0:0) spin down at 204143 ******
[ 2342.394420][ 2342.394423] ****** disk(2:0:0:0) spin down at 204239 ******
[78376.998914][78376.998917] ****** disk(1:0:0:0 0)(HD2) awaked by lvscan (cmd: 28) ******
[78390.843568][78390.843571] ****** disk(2:0:0:0 0)(HD3) awaked by lvscan (cmd: 28) ******
~ # cat /proc/partitions
major minor  #blocks  name
   7        0     147456 loop0
  31        0        256 mtdblock0
  31        1        512 mtdblock1
  31        2        256 mtdblock2
  31        3      10240 mtdblock3
  31        4      10240 mtdblock4
  31        5     112640 mtdblock5
  31        6      10240 mtdblock6
  31        7     112640 mtdblock7
  31        8       6144 mtdblock8
   8        0 3907018584 sda
   8       16 3907018584 sdb
   8       17    1998848 sdb1
   8       18    1999872 sdb2
   8       19 3903017984 sdb3
   8       48 3907018584 sdd
   8       49    1998848 sdd1
   8       50    1999872 sdd2
   8       51 3903017984 sdd3
   8       32 3907018584 sdc
   8       33    1998848 sdc1
   8       34    1999872 sdc2
   8       35 3903017984 sdc3
  31        9     102424 mtdblock9
   9        0    1997760 md0
   9        1    1998784 md1
  31       10       4464 mtdblock10
   9        2 11708660736 md2
 253        0     102400 dm-0
 253        1 11708555264 dm-1
~ # lvscan && lvdisplay --all
  ACTIVE            '/dev/vg_c1b2735e/vg_info_area' [100.00 MiB] inherit
  ACTIVE            '/dev/vg_c1b2735e/lv_168e8bf4' [10.90 TiB] inherit
  --- Logical volume ---
  LV Path                /dev/vg_c1b2735e/vg_info_area
  LV Name                vg_info_area
  VG Name                vg_c1b2735e
  LV UUID                SCecoC-nAfM-B8BV-mphS-QtBe-OLsT-J2wyF4
  LV Write Access        read/write
  LV Creation host, time NAS542, 2018-02-19 16:18:57 +0100
  LV Status              available
  # open                 0
  LV Size                100.00 MiB
  Current LE             25
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:0
  --- Logical volume ---
  LV Path                /dev/vg_c1b2735e/lv_168e8bf4
  LV Name                lv_168e8bf4
  VG Name                vg_c1b2735e
  LV UUID                2If3DE-2zBN-mlC4-PDiv-PqQ5-P9VD-iwOL7S
  LV Write Access        read/write
  LV Creation host, time NAS542, 2018-02-19 16:18:57 +0100
  LV Status              available
  # open                 0
  LV Size                10.90 TiB
  Current LE             2858534
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:1
-
mount: unknown filesystem type 'LVM2_member'
OK, you indeed have a logical volume (LVM is Logical Volume Manager), which is confirmed by lvdisplay.
Your data volume is /dev/vg_c1b2735e/lv_168e8bf4, so try to mount that:
mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
dmesg | tail
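If that mount fails, it can also help to check what filesystem the logical volume actually contains; blkid may or may not be included in the NAS firmware, so this is only a sketch:
blkid /dev/vg_c1b2735e/lv_168e8bf4      # prints TYPE=ext4 (or similar) if blkid is available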
-
~ # mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg_c1b2735e-lv_168e8bf4,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
~ # dmesg | tail
[79325.182629][79325.182631] ****** disk(2:0:0:0) spin down at 7902518 ******
[87365.942603][87365.942606] ****** disk(2:0:0:0 0)(HD3) awaked by mount (cmd: 88) ******
[87372.805601][87372.805604] ****** disk(1:0:0:0 0)(HD2) awaked by mount (cmd: 88) ******
[87386.728280][87386.728283] ****** disk(3:0:0:0 0)(HD4) awaked by mount (cmd: 88) ******
[87394.376921] JBD2: no valid journal superblock found
[87394.381842] EXT4-fs (dm-1): error loading journal
~ #
-
[87394.376921] JBD2: no valid journal superblock found
[87394.381842] EXT4-fs (dm-1): error loading journal
There is a problem with the journal. Maybe e2fsck can fix that.
e2fsck /dev/vg_c1b2735e/lv_168e8bf4
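Before letting e2fsck change anything, a read-only pass shows what it would repair; the -n flag is an extra precaution, not part of the suggestion above:
e2fsck -n /dev/vg_c1b2735e/lv_168e8bf4  # check only, answers 'no' to every question
e2fsck /dev/vg_c1b2735e/lv_168e8bf4     # the actual repair, as suggested above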
-
Hi Mijzelf
Now the journal is fixed by e2fsck, but I still have a problem: the fault has gone from "Volume down" to "Disk Group is down", and the RAID is degraded.
I've tried
mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
mount: mount point /mnt/mountpoint does not exist
Can you give me a hint to fix my NAS?
Rebooted in between, I presume? The whole root filesystem of the NAS is volatile, so after a reboot you'll have to repeat all the changes you made:
mkdir /mnt/mountpoint
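So after a reboot the whole sequence has to be repeated, something like this sketch (device names taken from the earlier output):
su
mkdir -p /mnt/mountpoint
mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
dmesg | tail                            # check here if the mount fails again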
-
Hi
I will start over, and this time I will remember the restart. As I have already said, this is completely new to me. I will try again and return with the result.