Got a problem with NAS542: RAID 5 volume down

Axberg
Axberg Posts: 14  Freshman Member
edited August 2020 in Personal Cloud Storage
Hi!
How can I retrieve my data from this NAS?
I'm a total newbie, so please bear with me.

~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdb3[1] sdc3[4] sdd3[3]
      11708660736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]

md1 : active raid1 sdc2[4] sdd2[5] sdb2[1]
      1998784 blocks super 1.2 [4/3] [_UUU]

md0 : active raid1 sdb1[6] sdd1[5] sdc1[4]
      1997760 blocks super 1.2 [4/3] [U_UU]

~ # mdadm --examine /dev/sd[abcd]3
mdadm: cannot open /dev/sda3: No such device or address
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2  (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ad1bbcba:851fe7ce:9b4e0515:15f8029b

    Update Time : Tue Jul 28 17:02:21 2020
       Checksum : ec098281 - correct
         Events : 18103

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : .AAA ('A' == active, '.' == missing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2  (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 0182f1cb:4bcb13a7:0ccde221:ae77558d

    Update Time : Tue Jul 28 17:02:21 2020
       Checksum : d49b92a8 - correct
         Events : 18103

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : .AAA ('A' == active, '.' == missing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2  (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 6834c489:800c6cb4:67c317fc:4e3d686f

    Update Time : Tue Jul 28 17:02:21 2020
       Checksum : 5c0e42bf - correct
         Events : 18103

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2

#NAS_August_2020



All Replies

  • Mijzelf
    Mijzelf Posts: 1,977  Guru Member
    According to this data, your RAID array is running, but degraded. What is the output of

    cat /proc/mounts
  • Axberg
    Axberg Posts: 14  Freshman Member
    Hi Mijzelf

    ~ # cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    none /proc/bus/usb usbfs rw,relatime 0 0
    devpts /dev/pts devpts rw,relatime,mode=600 0 0
    ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
    /dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
    /dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
    ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
    configfs /sys/kernel/config configfs rw,relatime 0 0

    What to do now?
  • Mijzelf
    Mijzelf Posts: 1,977  Guru Member
    /dev/md2 should have been mounted, but it isn't. What if you try to mount it manually?

    su
    mkdir /mnt/mountpoint
    mount /dev/md2 /mnt/mountpoint
    dmesg | tail

    It is also possible that the data partition is not directly on /dev/md2, but in a Logical Volume. To see that, use

    cat /proc/partitions
    lvscan && lvdisplay --all
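
    If those commands show a volume group, a non-destructive way to double-check what is actually on /dev/md2 before mounting anything is sketched below (whether blkid is included in the NAS542 firmware is an assumption; the other LVM tools appear in the thread):

    ```shell
    # Read-only checks; nothing here writes to the disks.
    blkid /dev/md2   # should report TYPE="LVM2_member" if LVM is in use (assumes blkid exists)
    pvscan           # list LVM physical volumes
    vgscan           # list volume groups found on them
    lvscan           # list logical volumes and whether they are ACTIVE
    ```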
  • Axberg
    Axberg Posts: 14  Freshman Member
    Hi again

    As I said, I'm a total newbie to Linux, so any help is most welcome.

    ~ # mkdir /mnt/mountpoint
    mkdir: can't create directory '/mnt/mountpoint': File exists
    ~ # mount /dev/md2 /mnt/mountpoint
    mount: unknown filesystem type 'LVM2_member'
    ~ # dmesg | tail
    [ 2340.614356]
    [ 2340.614359] ****** disk(1:0:0:0) spin down at 204061 ******
    [ 2341.430716]
    [ 2341.430719] ****** disk(3:0:0:0) spin down at 204143 ******
    [ 2342.394420]
    [ 2342.394423] ****** disk(2:0:0:0) spin down at 204239 ******
    [78376.998914]
    [78376.998917] ****** disk(1:0:0:0 0)(HD2) awaked by lvscan (cmd: 28) ******
    [78390.843568]
    [78390.843571] ****** disk(2:0:0:0 0)(HD3) awaked by lvscan (cmd: 28) ******

    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     147456 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 3907018584 sda
       8       16 3907018584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 3903017984 sdb3
       8       48 3907018584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51 3903017984 sdd3
       8       32 3907018584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35 3903017984 sdc3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
       9        2 11708660736 md2
     253        0     102400 dm-0
     253        1 11708555264 dm-1
    ~ # lvscan && lvdisplay --all
      ACTIVE            '/dev/vg_c1b2735e/vg_info_area' [100.00 MiB] inherit
      ACTIVE            '/dev/vg_c1b2735e/lv_168e8bf4' [10.90 TiB] inherit
      --- Logical volume ---
      LV Path                /dev/vg_c1b2735e/vg_info_area
      LV Name                vg_info_area
      VG Name                vg_c1b2735e
      LV UUID                SCecoC-nAfM-B8BV-mphS-QtBe-OLsT-J2wyF4
      LV Write Access        read/write
      LV Creation host, time NAS542, 2018-02-19 16:18:57 +0100
      LV Status              available
      # open                 0
      LV Size                100.00 MiB
      Current LE             25
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     1024
      Block device           253:0

      --- Logical volume ---
      LV Path                /dev/vg_c1b2735e/lv_168e8bf4
      LV Name                lv_168e8bf4
      VG Name                vg_c1b2735e
      LV UUID                2If3DE-2zBN-mlC4-PDiv-PqQ5-P9VD-iwOL7S
      LV Write Access        read/write
      LV Creation host, time NAS542, 2018-02-19 16:18:57 +0100
      LV Status              available
      # open                 0
      LV Size                10.90 TiB
      Current LE             2858534
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     1024
      Block device           253:1

  • Mijzelf
    Mijzelf Posts: 1,977  Guru Member
    mount: unknown filesystem type 'LVM2_member'

    OK, you indeed have a logical volume (LVM is the Logical Volume Manager), which is confirmed by lvdisplay.

    Your data volume is /dev/vg_c1b2735e/lv_168e8bf4, so try to mount it:

    mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint

    dmesg | tail
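
    One slightly more cautious variant, a sketch rather than part of the instructions above: mount read-only first, so nothing gets written to a possibly damaged filesystem on a degraded array.

    ```shell
    # Same device path as above; -o ro prevents any writes to the filesystem.
    mkdir -p /mnt/mountpoint
    mount -o ro /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
    dmesg | tail    # check kernel messages if the mount fails
    ```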


  • Axberg
    Axberg Posts: 14  Freshman Member
    ~ # mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
    mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg_c1b2735e-lv_168e8bf4,
           missing codepage or helper program, or other error

           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    ~ # dmesg | tail
    [79325.182629]
    [79325.182631] ****** disk(2:0:0:0) spin down at 7902518 ******
    [87365.942603]
    [87365.942606] ****** disk(2:0:0:0 0)(HD3) awaked by mount (cmd: 88) ******
    [87372.805601]
    [87372.805604] ****** disk(1:0:0:0 0)(HD2) awaked by mount (cmd: 88) ******
    [87386.728280]
    [87386.728283] ****** disk(3:0:0:0 0)(HD4) awaked by mount (cmd: 88) ******
    [87394.376921] JBD2: no valid journal superblock found
    [87394.381842] EXT4-fs (dm-1): error loading journal
    ~ #

  • Mijzelf
    Mijzelf Posts: 1,977  Guru Member
    [87394.376921] JBD2: no valid journal superblock found
    [87394.381842] EXT4-fs (dm-1): error loading journal

    There is a problem with the journal. Maybe e2fsck can fix that.

    e2fsck /dev/vg_c1b2735e/lv_168e8bf4
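
    A cautious way to run this (a sketch of standard e2fsck usage, not NAS-specific) is to do a read-only dry run first, then the real repair:

    ```shell
    # -n opens the filesystem read-only and answers 'no' to all repair
    # prompts, so you can see the damage before changing anything.
    e2fsck -n /dev/vg_c1b2735e/lv_168e8bf4
    # If the report looks sane, run the real repair; it prompts per fix.
    e2fsck /dev/vg_c1b2735e/lv_168e8bf4
    ```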

  • Axberg
    Axberg Posts: 14  Freshman Member
    Hi Mijzelf
    Now the journal is fixed by e2fsck, but I still have a problem: the fault has gone from "Volume down" to
    "Disk Group is down", and the RAID is degraded.
    I've tried
     mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
    mount: mount point /mnt/mountpoint does not exist

    Can you give me a hint on how to fix my NAS?



  • Mijzelf
    Mijzelf Posts: 1,977  Guru Member
    Rebooted in between, I presume? The whole root filesystem of the NAS is volatile, so after a reboot you'll have to repeat all the changes you made.

    mkdir /mnt/mountpoint
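
    Since the root filesystem is rebuilt at every boot, the full sequence from earlier in the thread has to be repeated after each reboot. A sketch, using the device path found above:

    ```shell
    su                                    # become root
    mkdir -p /mnt/mountpoint              # -p: no error if it already exists
    mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
    ls /mnt/mountpoint                    # verify the data is visible
    ```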
  • Axberg
    Axberg Posts: 14  Freshman Member
    Hi

    I will start over, and this time I'll keep in mind that the steps must be repeated after a restart. As I've already said, this is completely new to me; I'll try again and come back with the result.