Got a problem with NAS 542 raid 5 volume down
All Replies
-
Hi
Maybe I'm just too stupid to understand. When I make the directory and then mount, then reboot, I get the same two failures as before.
~ # mkdir /mnt/mountpoint
~ # mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
-
Tried this after reboot:
~ # mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
mount: mount point /mnt/mountpoint does not exist
~ #
~ # e2fsck /dev/vg_c1b2735e/lv_168e8bf4
e2fsck 1.42.12 (29-Aug-2014)
/dev/vg_c1b2735e/lv_168e8bf4 is mounted.
e2fsck: Cannot continue, aborting.
-
/dev/vg_c1b2735e/lv_168e8bf4 is mounted.
So it's already mounted. That is not surprising, as that is the default situation. The only reason it was not mounted was the error, which you repaired.
Have you already checked if your shares are back? If not, you can find the current mountpoint using
cat /proc/mounts
and look into the filesystem using
ls -l <mountpoint>
where you have to substitute the 'real' mountpoint.
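For what it's worth, the lookup and the listing can be combined into one line. This is only a sketch, assuming the busybox awk on the NAS behaves like the usual one, and it reuses the lv name from your earlier commands:
ls -l "$(awk '/lv_168e8bf4/ {print $2}' /proc/mounts)"
The awk part prints the second field (the mountpoint) of the /proc/mounts line that mentions the logical volume, and ls then lists whatever is mounted there.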
-
~ # cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
/dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
/dev/mapper/vg_c1b2735e-lv_168e8bf4 /i-data/168e8bf4 ext4 rw,noatime,user_xattr,barrier=1,stripe=48,data=ordered,usrquota 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
~ # ls -l <mountpoint>
sh: syntax error: unexpected newline
-
Hi
Here are the new outputs:
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdb3[1] sdc3[4] sdd3[3]
      11708660736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
md1 : active raid1 sdc2[4] sdd2[5] sdb2[1]
      1998784 blocks super 1.2 [4/3] [_UUU]
md0 : active raid1 sdb1[6] sdd1[5] sdc1[4]
      1997760 blocks super 1.2 [4/3] [U_UU]
unused devices: <none>
~ # mdadm --examine /dev/sd[abcd]3
mdadm: cannot open /dev/sda3: No such device or address
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2 (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ad1bbcba:851fe7ce:9b4e0515:15f8029b
    Update Time : Mon Aug 3 18:44:22 2020
       Checksum : ec118de4 - correct
         Events : 20785
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 1
    Array State : .AAA ('A' == active, '.' == missing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2 (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 0182f1cb:4bcb13a7:0ccde221:ae77558d
    Update Time : Mon Aug 3 18:44:22 2020
       Checksum : d4a39e0b - correct
         Events : 20785
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 3
    Array State : .AAA ('A' == active, '.' == missing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2 (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 6834c489:800c6cb4:67c317fc:4e3d686f
    Update Time : Mon Aug 3 18:44:22 2020
       Checksum : 5c164e22 - correct
         Events : 20785
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 2
    Array State : .AAA ('A' == active, '.' == missing)
~ #
I am very grateful for your attempt to help me.
-
/dev/mapper/vg_c1b2735e-lv_168e8bf4 /i-data/168e8bf4 ext4
The filesystem is mounted on /i-data/168e8bf4, so the view command should be
ls -l /i-data/168e8bf4
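If you also want to confirm that it really is the RAID volume sitting on that path, df can show the device and the size behind it (a sketch; df should be available in the busybox shell of the NAS):
df -h /i-data/168e8bf4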
-
Hi again
Got this, but I still have the disk group down and the RAID degraded.
~ # ls -l /i-data/168e8bf4
drwxrwxrwx   4 root root  4096 Feb 20  2018 admin
-rw-------   1 root root  9216 Feb 19  2018 aquota.user
drwx------   2 root root 16384 Feb 19  2018 lost+found
drwxrwxrwx   2 root root  4096 Apr 17 04:38 music
drwxrwxrwx   2 root root  4096 Jan 12  2020 photo
drwxrwxrwx 510 root root 20480 May  7 16:30 video
~ #
How to go from here?
I still have one hard drive that is not in the RAID 5. I had a RAID 5 with 4 drives to begin with, before the crash.
-
The array is degraded, yes. To solve that you'll have to use the 'repair' button in the web interface. But
~ # cat /proc/partitions
major minor  #blocks  name
<snip>
   8        0 3907018584 sda
   8       16 3907018584 sdb
   8       17    1998848 sdb1
   8       18    1999872 sdb2
   8       19 3903017984 sdb3
   8       48 3907018584 sdd
   8       49    1998848 sdd1
   8       50    1999872 sdd2
   8       51 3903017984 sdd3
   8       32 3907018584 sdc
   8       33    1998848 sdc1
   8       34    1999872 sdc2
   8       35 3903017984 sdc3
your sda disk has lost its partition table. So first check the SMART status of that disk. It might be damaged.
still have disk group down
What do you mean by that?
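If smartmontools happens to be present on the box (that is an assumption, it is not part of every firmware build), the health verdict and the error log of that disk can be read from the shell; otherwise the disk/S.M.A.R.T. information in the web interface, if your firmware exposes it, should give the same answer:
smartctl -H /dev/sda        # overall health self-assessment
smartctl -l error /dev/sda  # ATA error log, if the disk has recorded any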
-
Hi
The hard disk is a brand new one. I had one that made a clonking noise, and then the NAS crashed.
There is no option to repair the array. When I restart the NAS it beeps, and when I open the web interface I get a warning that the disk group is down, and a few seconds later I get the RAID degraded warning.
-
The harddisk is a brand new one,
So you exchanged it after your array got degraded?
Can you see the content of /i-data/168e8bf4 while the system says the disk group is down?
Does 'dmesg' show any I/O errors?
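As a sketch of how to filter that: plain dmesg piped through grep picks out the lines about the disks, e.g.
dmesg | grep -iE 'error|ata|sd[a-d]'
Any I/O error or reset messages for sda in that output would point at the new disk or its slot/cabling.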