Got a problem with NAS 542: RAID 5 volume down


All Replies

  • Axberg (Posts: 14, Freshman Member)
    Hi
    Maybe I'm just too stupid to understand. When I make the directory and then mount, then reboot, I get the same two failures as before.

    ~ # mkdir /mnt/mountpoint
    ~ #  mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint

  • Axberg (Posts: 14, Freshman Member)
    Tried this after reboot:
    ~ # mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
    mount: mount point /mnt/mountpoint does not exist
    ~ #
    ~ # e2fsck /dev/vg_c1b2735e/lv_168e8bf4
    e2fsck 1.42.12 (29-Aug-2014)
    /dev/vg_c1b2735e/lv_168e8bf4 is mounted.
    e2fsck: Cannot continue, aborting.
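As an aside, the "mount point does not exist" error after a reboot is expected here: the `/proc/mounts` output later in the thread shows `rootfs / rootfs`, i.e. a RAM-backed root filesystem, so a directory created under /mnt vanishes at every reboot. A minimal sketch of the idempotent pattern (demonstrated under /tmp so it is safe to run anywhere):

```shell
# Sketch: a mountpoint created on a RAM-backed rootfs must be recreated
# after every reboot. 'mkdir -p' is idempotent, so it is safe to run
# unconditionally (shown here under /tmp as a stand-in for /mnt).
mkdir -p /tmp/demo/mountpoint
mkdir -p /tmp/demo/mountpoint   # second run: not an error
ls -d /tmp/demo/mountpoint
```

On the NAS the same pattern would be `mkdir -p /mnt/mountpoint` followed by the `mount` command from the posts above.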

  • Mijzelf (Posts: 2,560, Guru Member)
    /dev/vg_c1b2735e/lv_168e8bf4 is mounted.
    So it's already mounted. That is not surprising, as that is the default situation. The only reason it was not mounted before was the error, which you repaired.
    Have you already checked if your shares are back?

    If not, you can find the current mountpoint using

    cat /proc/mounts

    and look into the filesystem using

    ls -l <mountpoint>

    where you have to substitute the 'real'  mountpoint.
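The mountpoint can also be extracted mechanically: each `/proc/mounts` line is `device mountpoint fstype options dump pass`, so field 2 is what you want. A small sketch using a sample line from this thread:

```shell
# A /proc/mounts line has the form: device mountpoint fstype options dump pass.
# Extract the mountpoint (field 2) with awk, using a sample line from the thread:
line='/dev/mapper/vg_c1b2735e-lv_168e8bf4 /i-data/168e8bf4 ext4 rw,noatime 0 0'
mountpoint=$(printf '%s\n' "$line" | awk '{ print $2 }')
printf '%s\n' "$mountpoint"   # prints /i-data/168e8bf4
```

On the real device the same idea would be `awk '$1 ~ /lv_168e8bf4/ { print $2 }' /proc/mounts`.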
  • Axberg (Posts: 14, Freshman Member)
    ~ # cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    none /proc/bus/usb usbfs rw,relatime 0 0
    devpts /dev/pts devpts rw,relatime,mode=600 0 0
    ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
    /dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
    /dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
    ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
    /dev/mapper/vg_c1b2735e-lv_168e8bf4 /i-data/168e8bf4 ext4 rw,noatime,user_xattr,barrier=1,stripe=48,data=ordered,usrquota 0 0
    configfs /sys/kernel/config configfs rw,relatime 0 0

    ~ # ls -l <mountpoint>
    sh: syntax error: unexpected newline


  • Axberg (Posts: 14, Freshman Member)
    Hi
    Here is the new output:
    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid5 sdb3[1] sdc3[4] sdd3[3]
          11708660736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]

    md1 : active raid1 sdc2[4] sdd2[5] sdb2[1]
          1998784 blocks super 1.2 [4/3] [_UUU]

    md0 : active raid1 sdb1[6] sdd1[5] sdc1[4]
          1997760 blocks super 1.2 [4/3] [U_UU]
    unused devices: <none>

    ~ # mdadm --examine /dev/sd[abcd]3
    mdadm: cannot open /dev/sda3: No such device or address
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Mon Feb 19 16:18:55 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : ad1bbcba:851fe7ce:9b4e0515:15f8029b

        Update Time : Mon Aug  3 18:44:22 2020
           Checksum : ec118de4 - correct
             Events : 20785

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Mon Feb 19 16:18:55 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 0182f1cb:4bcb13a7:0ccde221:ae77558d

        Update Time : Mon Aug  3 18:44:22 2020
           Checksum : d4a39e0b - correct
             Events : 20785

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Mon Feb 19 16:18:55 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 6834c489:800c6cb4:67c317fc:4e3d686f

        Update Time : Mon Aug  3 18:44:22 2020
           Checksum : 5c164e22 - correct
             Events : 20785

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AAA ('A' == active, '.' == missing)
    ~ #

    I am very grateful for your attempt to help me
  • Mijzelf (Posts: 2,560, Guru Member)
    /dev/mapper/vg_c1b2735e-lv_168e8bf4 /i-data/168e8bf4 ext4

    The filesystem is mounted on /i-data/168e8bf4, so the view command should be

    ls -l /i-data/168e8bf4



  • Axberg (Posts: 14, Freshman Member)
    Hi again
    Got this, but I still have the disk group down and the RAID degraded.

    ~ # ls -l /i-data/168e8bf4
    drwxrwxrwx    4 root     root          4096 Feb 20  2018 admin
    -rw-------    1 root     root          9216 Feb 19  2018 aquota.user
    drwx------    2 root     root         16384 Feb 19  2018 lost+found
    drwxrwxrwx    2 root     root          4096 Apr 17 04:38 music
    drwxrwxrwx    2 root     root          4096 Jan 12  2020 photo
    drwxrwxrwx  510 root     root         20480 May  7 16:30 video
    ~ #
    Where do I go from here?
    I still have one hard drive that is not in the RAID 5; I had a RAID 5 with 4 drives to begin with, before the crash.
  • Mijzelf (Posts: 2,560, Guru Member)
    The array is degraded, yes. To solve that you'll have to use the 'repair' button in the web interface. But

    ~ # cat /proc/partitions
    major minor  #blocks  name
    <snip>
       8        0 3907018584 sda
       8       16 3907018584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 3903017984 sdb3
       8       48 3907018584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51 3903017984 sdd3
       8       32 3907018584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35 3903017984 sdc3

    your sda disk has lost its partition table. So first check the SMART status of that disk. It might be damaged.

    still have disk group down
    What do you mean by that?
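The SMART check suggested above can be done from the shell if `smartctl` (from smartmontools) is present; whether the stock NAS542 firmware ships it is an assumption, so the sketch below guards for that:

```shell
# Hedged sketch: query the overall SMART health of /dev/sda, but only
# if the smartctl tool is actually installed on this firmware.
if command -v smartctl >/dev/null 2>&1; then
    # '-H': print the drive's overall health self-assessment
    smartctl -H /dev/sda || echo "could not read /dev/sda"
else
    echo "smartctl not available on this system"
fi
```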

  • Axberg (Posts: 14, Freshman Member)
    Hi
    The hard disk is a brand new one; the old one made a clonking noise and then the NAS crashed.
    There is no option to repair the array. When I restart the NAS it beeps, and when I open the web interface I get a warning that the disk group is down; a few seconds after that I get the RAID degraded warning.

  • Mijzelf (Posts: 2,560, Guru Member)
    The harddisk is a brand new one,

    So you exchanged it after your array got degraded?

    Can you see the content of /i-data/168e8bf4 while the system says the disk group is down?

    Does 'dmesg' show any I/O errors?
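The dmesg check can be narrowed down with grep, e.g. (a generic sketch, not NAS542-specific):

```shell
# Count kernel log lines mentioning I/O errors; 0 means none were logged.
# (dmesg may need root on some systems; its errors are suppressed so a
# count is printed either way.)
count=$(dmesg 2>/dev/null | grep -ci 'i/o error')
echo "I/O error lines in kernel log: ${count:-0}"
```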
