zyxel 325v2 migration fail

cmbranco Posts: 6  Freshman Member
edited August 2018 in Personal Cloud Storage
Hello

I have a Zyxel 325v2 working with only one HDD (3 TB).
Meanwhile I bought a second HDD (same model as the existing one).
I've inserted it into the NAS and started the migration to RAID1.
During the migration I got an error message (it simply stated "failed!").

Since then I cannot access the HDD (it does not mount).
I've already tried removing the 2nd HDD, but it is still not accessible.

Is there a way to revert this? Can I access the HDD directly (using a simple HDD enclosure)?

Thanks

All Replies

  • Mijzelf Posts: 2,645  Guru Member
    Can you open the telnet backdoor, login over telnet as root, and post the output of
    cat /proc/partitions
    cat /proc/mdstat
    mdadm --examine /dev/sd?2
  • cmbranco Posts: 6  Freshman Member

    ~ $ cat /proc/partitions
    major minor  #blocks  name

       7        0     143360 loop0
       8        0 2930266584 sda
       8        1     498688 sda1
       8        2 2929766400 sda2
      31        0       1024 mtdblock0
      31        1        512 mtdblock1
      31        2        512 mtdblock2
      31        3        512 mtdblock3
      31        4      10240 mtdblock4
      31        5      10240 mtdblock5
      31        6      48896 mtdblock6
      31        7      10240 mtdblock7
      31        8      48896 mtdblock8
       9        0 2929765240 md0


    ~ $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1]
    md0 : active raid1 sda2[0]
          2929765240 blocks super 1.2 [2/1] [U_]

    unused devices: <none>

    ~ $ su mdadm --examine /dev/sda2
    su: unrecognized option '--examine'
    BusyBox v1.17.2 (2017-06-23 10:40:08 CST) multi-call binary.

    Usage: su [OPTIONS] [-] [USERNAME]

  • cmbranco Posts: 6  Freshman Member
    ~ # mdadm --examine /dev/sda2
    /dev/sda2:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : add65c9f:ed27723d:2436a410:f2838abb
               Name : NSA325-v2:0  (local to host NSA325-v2)
      Creation Time : Sat Aug 18 20:10:36 2018
         Raid Level : raid1
       Raid Devices : 2

     Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
         Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
      Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
        Data Offset : 2048 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 04a3b600:193980ce:7170a53f:7e522d76

        Update Time : Tue Sep 25 16:30:33 2018
           Checksum : f2760361 - correct
             Events : 398


       Device Role : Active device 0
       Array State : A. ('A' == active, '.' == missing)

  • Mijzelf Posts: 2,645  Guru Member
    Hm.
    Creation Time : Sat Aug 18 20:10:36 2018
    Your initial post was from August 21, so I guess this array was created new at the moment you added the 2nd disk. I'm not sure if that is normal; intuitively I'd expect the original creation date of your original volume.
    Update Time : Tue Sep 25 16:30:33 2018
    The array is in use! Are you sure the disk isn't mounted? Or do you mean it doesn't mount on your PC? The first question can be answered with
    cat /proc/mounts
    If /dev/md0 shows up there, the array is mounted. In that case, do a
    ls -l /i-data/md0/
    to see if your shares are still there, and to see the creation time stamps of the built-in shares.

  • cmbranco Posts: 6  Freshman Member
    Hello Mijzelf,

    Thanks a lot for the support.

    I had the NAS running for a couple of years with only one drive. On the 18th of August I added the second drive (same model as the existing one) and performed a migration (via the web interface) to RAID1.

    It appears that during this migration something went wrong and the drive is no longer available.
    I then removed the second drive (it was empty) and I've been trying to mount the first drive (without any success).

    At a certain point I found a possible solution on another forum:
    I tried it and I could mount and list the content. But when I was about to copy the content to another external drive, the forum was no longer available, and as I hadn't saved the commands I was stuck again.

    I believe it is something with the "superblock". I've run a scan and I get the following:
    e2fsck 1.41.14 (22-Dec-2010)
    The filesystem size (according to the superblock) is 732441344 blocks
    The physical size of the device is 732441310 blocks
    Either the superblock or the partition table is likely to be corrupt!

    I will execute the commands you mentioned later today to see the result.
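As an aside: the 34-block discrepancy e2fsck reports lines up exactly with the earlier mdadm --examine numbers, assuming the usual 4 KiB ext filesystem block size (an inference from the sizes, not stated in the thread):

```python
# Cross-check of the sizes quoted in this thread.
KIB_PER_FS_BLOCK = 4  # assumed ext2/3/4 block size of 4 KiB

fs_blocks = 732441344        # filesystem size per the superblock (e2fsck)
dev_blocks = 732441310       # physical size of /dev/md0 (e2fsck)
array_size_kib = 2929765240  # "Array Size" from mdadm --examine (KiB)
avail_size_kib = 2929765376  # "Avail Dev Size" from mdadm --examine (KiB)

# The filesystem still believes it spans the pre-RAID1 container size,
# while the new raid1 array is slightly smaller:
assert fs_blocks * KIB_PER_FS_BLOCK == avail_size_kib
assert dev_blocks * KIB_PER_FS_BLOCK == array_size_kib
print((fs_blocks - dev_blocks) * KIB_PER_FS_BLOCK, "KiB overhang")  # → 136 KiB overhang
```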

  • Mijzelf Posts: 2,645  Guru Member
    The filesystem size (according to the superblock) is 732441344 blocks
    The physical size of the device is 732441310 blocks
    Ah yes. I remember I've seen this before. It is caused by a conversion from a linear or raid0 array to raid1, in which case the size of the internal container shrinks a bit.
    A solution is to resize the inner filesystem:
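A minimal sketch of that resize sequence, assuming the degraded array is assembled as /dev/md0 (as in the mdstat output earlier in the thread); this has not been verified on an NSA325v2, so the actual calls are kept inside a function and only invoked manually, as root, on the NAS itself:

```shell
#!/bin/sh
# Sketch only, not verified on an NSA325v2: shrink the ext filesystem
# so it exactly fits the (now slightly smaller) RAID1 container.
shrink_to_fit() {
    dev="$1"
    e2fsck -f "$dev"    # resize2fs insists on a freshly checked filesystem
    resize2fs "$dev"    # no size argument: resize to match the device size
}
# On the NAS, as root, one would run:
#   shrink_to_fit /dev/md0
```

With no size argument, resize2fs resizes the filesystem to the size of the underlying device, which here means shrinking it by the 34 blocks the superblock over-reports.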

  • cmbranco Posts: 6  Freshman Member
    here is the result of the commands:
    ~ $ cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    none /proc/bus/usb usbfs rw,relatime 0 0
    devpts /dev/pts devpts rw,relatime,mode=600 0 0
    /dev/mtdblock6 /zyxel/mnt/nand yaffs2 ro,relatime 0 0
    /dev/sda1 /zyxel/mnt/sysdisk ext2 ro,relatime,errors=continue 0 0
    /dev/loop0 /ram_bin ext2 ro,relatime,errors=continue 0 0
    /dev/loop0 /usr ext2 ro,relatime,errors=continue 0 0
    /dev/loop0 /lib/security ext2 ro,relatime,errors=continue 0 0
    /dev/loop0 /lib/modules ext2 ro,relatime,errors=continue 0 0
    /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/var tmpfs rw,relatime,size=5120k 0 0
    /dev/mtdblock4 /etc/zyxel yaffs2 rw,relatime 0 0
    /dev/mtdblock4 /usr/local/apache/web_framework/data/config yaffs2 rw,relatime 0 0
    ~ # ls -l /i-data/sda2
    ls: /i-data/sda2: No such file or directory

  • cmbranco Posts: 6  Freshman Member
    ~ # ls -l /i-data/md0/
    ls: /i-data/md0/: No such file or directory
    ~ # ls -l /i-data/sda
    ls: /i-data/sda: No such file or directory
    ~ # ls -l /i-data/sda2
    ls: /i-data/sda2: No such file or directory

  • Mijzelf Posts: 2,645  Guru Member
    OK, so indeed it's not mounted. Combined with your e2fsck output, it seems clear that the 'physical' device /dev/md0 got a different size when the 2nd disk was added.
    Now you have two choices: trying to revert the RAID configuration, or trying to resize the filesystem.
    In my opinion the 2nd is the more promising, as it's unknown what the previous RAID configuration was.
    I refer to the link I gave two posts ago.
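On the first post's question about a simple HDD enclosure: the member partition can also be inspected read-only from a Linux PC. With mdadm 1.2 metadata the superblock sits near the start of the member and the data begins at the 2048-sector offset shown by mdadm --examine, so a loop device with that offset exposes the filesystem. A sketch only; /dev/sdb2 is an assumed device name for the disk in the enclosure:

```shell
#!/bin/sh
# Sketch only: read-only look at the RAID member from a Linux PC.
# /dev/sdb2 is an assumption; the 2048-sector data offset comes from
# the 'mdadm --examine' output earlier in the thread.
inspect_member() {
    part="$1"
    offset=$((2048 * 512))   # data offset in bytes (sectors * 512)
    loopdev=$(losetup --find --show --read-only --offset "$offset" "$part")
    mount -o ro "$loopdev" /mnt
    ls -l /mnt               # the shares should be visible here
}
# On the PC, as root, one would run:
#   inspect_member /dev/sdb2
```

Keeping everything read-only (losetup --read-only, mount -o ro) means this can be tried for copying data off before attempting any resize on the NAS.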
