NAS540 data lost on hard disk after factory reset

Remco van Kuilenburg Posts: 4  Freshman Member
edited January 2020 in Personal Cloud Storage
Hi Guys,

I hope you can help me. This is now the second time I've had this problem. My NAS540 crashed last week and I had to do a factory reset. After the reboot the system told me it had found new disks. The only option was to adopt them, and after that I had to press OK to format the disk. Of course I did not press this, because all my data would be lost.

I had the same thing some years ago with the same disk and lost access to my data; it is not really gone, but I cannot get to it. So as not to run into the same problem again, I have not connected this disk.

Is there anybody who has had the same issue and knows a solution?

Cheers,

Remco


#NAS_Jan_2020

All Replies

  • Mijzelf Posts: 2,790  Guru Member
    Power surge?

    Anyway, you are not the only one who lost their volume after an unclean shutdown. Read here.

    To find out how bad the damage is, you'll have to enable the SSH server, log in over SSH, and post the output of

    cat /proc/partitions
    cat /proc/mdstat
    cat /proc/mounts
    su
    mdadm --examine /dev/sd[abcd]3
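
    Roughly, what each of these reports:

    # /proc/partitions lists the disks and partitions the kernel sees
    # /proc/mdstat shows the state of the software RAID arrays
    # /proc/mounts shows which filesystems are actually mounted, and where
    # su gives you root, which mdadm needs
    # mdadm --examine prints the RAID superblock on each disk's data partition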


  • Remco van Kuilenburg Posts: 4  Freshman Member
    edited January 2020
    Hi,

    Thanks for your reply. For me this is like a foreign language; I'm not a skilled programmer. I tried the SSH part and got errors.

    I'm using a Mac and tried it in Terminal:

    MacBook-Pro:~ remcovankuilenburg$ ssh 192.168.1.14

    ssh: connect to host 192.168.1.14 port 22: Connection refused


  • Mijzelf Posts: 2,790  Guru Member
    You have to enable the SSH server first, in Control Panel->Terminal. Then connect with

    ssh admin@192.168.1.14
  • Remco van Kuilenburg Posts: 4  Freshman Member
    Aha, that helps.

    Here is the output:

    ~ $ cat /proc/partitions
    major minor  #blocks  name

       7        0     147456 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 3907018584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 3903017984 sda3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
       9        2 3902886720 md2



    ~ $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid1 sda3[0]
          3902886720 blocks super 1.2 [1/1] [U]

    md1 : active raid1 sda2[0]
          1998784 blocks super 1.2 [4/1] [U___]

    md0 : active raid1 sda1[5]
          1997760 blocks super 1.2 [4/1] [U___]

    unused devices: <none>


    ~ $ cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    none /proc/bus/usb usbfs rw,relatime 0 0
    devpts /dev/pts devpts rw,relatime,mode=600 0 0
    ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
    /dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
    /dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
    ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
    /dev/md2 /i-data/083cd5b1 ext4 rw,noatime,user_xattr,barrier=1,stripe=16,data=ordered,usrquota 0 0
    /dev/md2 /usr/local/apache/htdocs/desktop,/pkg ext4 rw,noatime,user_xattr,barrier=1,stripe=16,data=ordered,usrquota 0 0
    configfs /sys/kernel/config configfs rw,relatime 0 0



    ~ $ su
    Password:

    BusyBox v1.19.4 (2019-09-04 14:33:19 CST) built-in shell (ash)
    Enter 'help' for a list of built-in commands.

    ~ # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 083cd5b1:9a1d86e3:22031a75:be24d2be
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Sun Jan 17 07:28:01 2016
         Raid Level : raid1
       Raid Devices : 1

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 3902886720 (3722.08 GiB 3996.56 GB)
      Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : febebea7:d85c686d:05081d38:1cc11c9f

        Update Time : Mon Feb 17 20:41:03 2020
           Checksum : 1d8ed1bc - correct
             Events : 14


       Device Role : Active device 0
       Array State : A ('A' == active, '.' == missing)
    mdadm: cannot open /dev/sdb3: No such device or address
    mdadm: cannot open /dev/sdc3: No such device or address
    mdadm: cannot open /dev/sdd3: No such device or address


  • Mijzelf Posts: 2,790  Guru Member
    Do you have only one disk in that NAS? According to /proc/partitions there is only one disk, and that disk is fine. The RAID array on the data partition is assembled and mounted.
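
    If you want to check it from the shell: your /proc/mounts output shows the volume mounted at /i-data/083cd5b1, so something like this should list your share folders and show the used space:

    ls /i-data/083cd5b1      # the share folders should show up here
    df -h /i-data/083cd5b1   # size and usage of the data volume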
  • Remco van Kuilenburg Posts: 4  Freshman Member
    edited February 2020
    Yes, only one. I took the other ones out after the "crash". The "old" partition I can get into now, and the data is there. But there is also a second partition I can't enter; that dates from some years ago, after a Zyxel firmware upgrade. That partition holds all the pictures of my son, etc. It is there, but I cannot get in.
  • Mijzelf Posts: 2,790  Guru Member
    On this disk there is no 2nd data partition.

    The disk is 4TB
       8        0 3907018584 sda
    The data partition is only slightly smaller
       8        3 3903017984 sda3
    And the raid array on that data partition fills the whole partition

    md2 : active raid1 sda3[0]
          3902886720 blocks super 1.2 [1/1] [U]

    But there could be a non-active share. Have a look in your shares menu to see if that is the case.
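
    The arithmetic, for what it's worth (my reading of the numbers above; /proc/partitions counts 1 KiB blocks):

    # sda  = 3907018584 KiB  (the whole 4 TB disk)
    # sda3 = 3903017984 KiB  (the disk minus sda1/sda2, the ~2 GB firmware and swap partitions)
    # md2  = 3902886720 KiB  (sda3 minus the 262144-sector = 128 MiB RAID data offset)
    # So there is simply no room left on this disk for a second data partition.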

  • PawelS Posts: 8  Freshman Member
    Hi,
    I have a similar problem. My NAS is a NAS326, but I suppose that doesn't matter.

    My data:
    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     146432 loop0
      31        0       2048 mtdblock0
      31        1       2048 mtdblock1
      31        2      10240 mtdblock2
      31        3      15360 mtdblock3
      31        4     108544 mtdblock4
      31        5      15360 mtdblock5
      31        6     108544 mtdblock6
       8        0  976762584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3  972762112 sda3
       8       16  976762584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19  972762112 sdb3
       9        0    1997760 md0
       9        1    1998784 md1
       9        2  972630848 md2
       9        3  972630848 md3
     253        0     102400 dm-0
     253        1  972525568 dm-1
     253        2     102400 dm-2
     253        3  972525568 dm-3
    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
    md3 : active raid1 sda3[0]
          972630848 blocks super 1.2 [1/1] [U]

    md2 : active raid1 sdb3[0]
          972630848 blocks super 1.2 [1/1] [U]

    md1 : active raid1 sda2[0] sdb2[2]
          1998784 blocks super 1.2 [2/2] [UU]

    md0 : active raid1 sda1[0] sdb1[2]
          1997760 blocks super 1.2 [2/2] [UU]

    unused devices: <none>
    ~ # cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    devpts /dev/pts devpts rw,relatime,mode=600 0 0
    ubi4:ubi_rootfs1 /firmware/mnt/nand ubifs ro,relatime 0 0
    /dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,data=ordered 0 0
    /dev/loop0 /ram_bin ext2 ro,relatime 0 0
    /dev/loop0 /usr ext2 ro,relatime 0 0
    /dev/loop0 /lib/security ext2 ro,relatime 0 0
    /dev/loop0 /lib/modules ext2 ro,relatime 0 0
    /dev/loop0 /lib/locale ext2 ro,relatime 0 0
    /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
    ubi2:ubi_config /etc/zyxel ubifs rw,relatime 0 0
    configfs /sys/kernel/config configfs rw,relatime 0 0
    ~ # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 3fa2ac41:656e56d2:686f9f1f:1b51f286
               Name : NAS326:2  (local to host NAS326)
      Creation Time : Mon May 25 03:57:52 2015
         Raid Level : raid1
       Raid Devices : 1

     Avail Dev Size : 1945262080 (927.57 GiB 995.97 GB)
         Array Size : 972630848 (927.57 GiB 995.97 GB)
      Used Dev Size : 1945261696 (927.57 GiB 995.97 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : c93ceca9:5ab09837:787ce117:f2a96ff9

        Update Time : Tue Jun 30 17:43:12 2020
           Checksum : 58295563 - correct
             Events : 16


       Device Role : Active device 0
       Array State : A ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : e497df71:99394e60:2cff4d77:9efb710f
               Name : NAS326:3  (local to host NAS326)
      Creation Time : Sun Oct 29 11:58:05 2017
         Raid Level : raid1
       Raid Devices : 1

     Avail Dev Size : 1945262080 (927.57 GiB 995.97 GB)
         Array Size : 972630848 (927.57 GiB 995.97 GB)
      Used Dev Size : 1945261696 (927.57 GiB 995.97 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 67b61188:5f31e662:1f99eab7:9b372964

        Update Time : Tue Jun 30 16:15:22 2020
           Checksum : 104b055f - correct
             Events : 2


       Device Role : Active device 0
       Array State : A ('A' == active, '.' == missing)
    mdadm: cannot open /dev/sdc3: No such device or address
    mdadm: cannot open /dev/sdd3: No such device or address

    What can I do to avoid losing all the data on my two disks?
    Please help.
    TIA

  • PawelS Posts: 8  Freshman Member
    Additional info: there was no array; the disks worked separately.
  • Mijzelf Posts: 2,790  Guru Member
    253        1  972525568 dm-1
    253        3  972525568 dm-3
    You have logical volumes on your single-disk arrays. These should be mounted, but aren't. With one disk I'd expect a filesystem error, but two at once? What happened before you lost your volumes?

    What happens if you try to mount them?

    su
    mkdir /tmp/mountpoint
    mount /dev/dm-1 /tmp/mountpoint
    dmesg | tail
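
    If the mount fails, the tail of dmesg should say why. A read-only check of the filesystem (it changes nothing on disk; this assumes e2fsck is included in the firmware) would then be:

    # -n opens the filesystem read-only and answers "no" to every
    # repair question, so it is safe to run on a damaged volume
    e2fsck -n /dev/dm-1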

