NAS310 - RAID1 failed, volume down, how do I recover data?

stest
stest Posts: 9  Freshman Member
edited July 2019 in Personal Cloud Storage
I've been running an NSA310 for a long time with just a 1TB hard drive inside, and now I am trying to add a second 1TB ESATA drive for a RAID1 setup. Not long after I clicked the migrate button to start, the youngest in the house managed to grab the ESATA cable and pulled it out of the NAS. Now my volume is down, even though all it should have been doing was copying data to the other drive.

When I run the scan volume utility in the web client, I get the following results:
    e2fsck 1.41.14 (22-Dec-2010)
    The filesystem size (according to the superblock) is 244061472 blocks
    The physical size of the device is 244061454 blocks
    Either the superblock or the partition table is likely to be corrupt!
    Abort? no
    /dev/md0 contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/md0: 399705/61022208 files (2.5% non-contiguous), 206351709/244061472 blocks

How do I recover my many years' worth of data?



Comments

  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    Can you open the Telnet backdoor, login over telnet, and post the output of

        cat /proc/mdstat
        cat /proc/mounts
        cat /proc/partitions
        su
        mdadm --examine /dev/sda2

  • stest
    stest Posts: 9  Freshman Member
    Let's see if this can get around the HTML filter...

        ~ $ cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1]
        md0 : active raid1 sda2[0]
              976245816 blocks super 1.0 [2/1] [U_]

        unused devices: <none>
        ~ $ cat /proc/mounts
        rootfs / rootfs rw 0 0
        /proc /proc proc rw,relatime 0 0
        /sys /sys sysfs rw,relatime 0 0
        none /proc/bus/usb usbfs rw,relatime 0 0
        devpts /dev/pts devpts rw,relatime,mode=600 0 0
        /dev/mtdblock6 /zyxel/mnt/nand yaffs2 ro,relatime 0 0
        /dev/sda1 /zyxel/mnt/sysdisk ext2 ro,relatime,errors=continue 0 0
        /dev/loop0 /ram_bin ext2 ro,relatime,errors=continue 0 0
        /dev/loop0 /usr ext2 ro,relatime,errors=continue 0 0
        /dev/loop0 /lib/security ext2 ro,relatime,errors=continue 0 0
        /dev/loop0 /lib/modules ext2 ro,relatime,errors=continue 0 0
        /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
        /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
        /dev/ram0 /usr/local/var tmpfs rw,relatime,size=5120k 0 0
        /dev/mtdblock4 /etc/zyxel yaffs2 rw,relatime 0 0
        /dev/mtdblock4 /usr/local/apache/web_framework/data/config yaffs2 rw,relatime 0 0
        ~ $ cat /proc/partitions
        major minor  #blocks  name

           7        0     139264 loop0
           8        0  976762584 sda
           8        1     514048 sda1
           8        2  976245952 sda2
           8       16  976762584 sdb
           8       17     514048 sdb1
           8       18  976245952 sdb2
          31        0       1024 mtdblock0
          31        1        512 mtdblock1
          31        2        512 mtdblock2
          31        3        512 mtdblock3
          31        4      10240 mtdblock4
          31        5      10240 mtdblock5
          31        6      48896 mtdblock6
          31        7      10240 mtdblock7
          31        8      48896 mtdblock8
           9        0  976245816 md0
        ~ $ su
        Password:

        BusyBox v1.17.2 (2016-03-11 17:11:16 CST) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        ~ # mdadm --examine /dev/sda2
        /dev/sda2:
                  Magic : a92b4efc
                Version : 1.0
            Feature Map : 0x0
             Array UUID : 898a84cb:95d74d20:df7e7952:01ede6f0
                   Name : nsa310:0  (local to host nsa310)
          Creation Time : Thu Jul 18 18:27:46 2019
             Raid Level : raid1
           Raid Devices : 2

         Avail Dev Size : 976245816 (931.02 GiB 999.68 GB)
             Array Size : 976245816 (931.02 GiB 999.68 GB)
           Super Offset : 1952491888 sectors
                  State : clean
            Device UUID : 77f39edb:6c997257:d8007d77:af1cd38a

            Update Time : Tue Jul 30 20:47:29 2019
               Checksum : 88ee3ef8 - correct
                 Events : 150

            Device Role : Active device 0
            Array State : A. ('A' == active, '.' == missing)
  • stest
    stest Posts: 9  Freshman Member
    Edited duplicate comment so it doesn't take up space, since I can't delete it.
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    For some reason the filesystem on the raid array is bigger than the array itself. I think that is because it was a linear array and is now RAID1, and the RAID1 header needs some extra space.
    Fortunately the header is a version 1.0 header, which means it's at the end of the partition. So by accessing the filesystem on the partition itself, we get some extra space to play with.
    The partition is in use by the raid array, so to access it, we first have to stop the array.
    The procedure:
    stop raid array
    repair filesystem
    shrink filesystem
    start raid array
    grow filesystem on raid array

    In commands:
        su
        mdadm --stop /dev/md0
        e2fsck -f /dev/sda2
        resize2fs /dev/sda2 920G
        mdadm --assemble /dev/md0 /dev/sda2 --run
        resize2fs /dev/md0
    If any of these commands fails, stop and post the results.

    It is possible that e2fsck is actually called e2fsck.new.  You can check that beforehand by simply executing it without arguments.
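    As a sanity check before and after the shrink, you could compare the filesystem's own block count with the device sizes (a minimal sketch; I'm assuming dumpe2fs is present in the firmware, possibly named dumpe2fs.new like e2fsck):

        dumpe2fs -h /dev/sda2 | grep -i 'block count'
        cat /proc/partitions

    dumpe2fs reports 4k blocks while /proc/partitions uses 1k blocks, so after the shrink the filesystem should fit inside md0's 976245816 / 4 = 244061454 blocks.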
  • stest
    stest Posts: 9  Freshman Member
    I ran this and thought e2fsck's result wasn't a failure, but now that I'm thinking about it, I'm a bit worried.
        ~ # mdadm --stop /dev/md0
        mdadm: stopped /dev/md0
        ~ # e2fsck.new -f /dev/sda2
        e2fsck 1.41.14 (22-Dec-2010)
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        /dev/sda2: 399705/61022208 files (2.5% non-contiguous), 206351709/244061472 blocks
        ~ # resize2fs /dev/sda2 920G
        resize2fs 1.41.14 (22-Dec-2010)
        Resizing the filesystem on /dev/sda2 to 241172480 (4k) blocks.
        The filesystem on /dev/sda2 is now 241172480 blocks long.

        ~ # mdadm --assemble /dev/md0 /dev/sda2 --run
        mdadm: /dev/md0 has been started with 1 drive (out of 2).
        ~ # resize2fs /dev/md0
        resize2fs 1.41.14 (22-Dec-2010)
        Resizing the filesystem on /dev/md0 to 244061454 (4k) blocks.
        The filesystem on /dev/md0 is now 244061454 blocks long.
    The browser GUI is now showing the volume as Degraded instead of Down, and it still shows as "84.30% (772.56 GB) Used", but I'm still unable to access anything stored on the drive.

    Separately, another issue has come up for which the ESATA drive would be useful. At this point, I'd be happy with either the RAID1 functioning as intended, or just having access to my files on the internal disk with no RAID like before (and I could then use the ESATA drive elsewhere). Thanks for your continuing help in either direction.
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    Hm. Your files should be visible by now. Have you tried to reboot the box?
  • stest
    stest Posts: 9  Freshman Member
    I thought I had, and I tried it again and am able to see a handful of files. I can see all the shares that were on it before; most are empty, and three of them have some small files visible.

    In the web GUI, I see the volume still degraded, with options to scan (with optional "Auto File Repair") or Repair the volume (as well as edit or delete, which I'm sure I don't want). Should these repair operations be safe to perform?
  • stest
    stest Posts: 9  Freshman Member
    I just now stopped and thought about the note on the volume page of the web gui:
    Note:
    When internal disk becomes defective while in RAID1 mode, the NSA will be in "uninitilized" state.
    You can bring the NSA to its normal state by using the external disk as the new internal disk.
    After login WEB GUI, you can repair the degraded RAID1 by another external disk.
    Does this mean to say that the only way to recover usage is by the repair button, letting it copy everything over to the ESATA drive to reconstruct the RAID1? I'm very apprehensive about just trying things on my own now, with all my data at stake.
  • Mijzelf
    Mijzelf Posts: 2,763  Guru Member
    First, don't panic. According to e2fsck you have 399705 files, occupying 206351709*4k blocks, which is a bit more than 800GB. e2fsck isn't complaining about anything, so the files are just there. The only problem is finding them again.
    Do you know an (exact) filename of one of the missing files? Then login on the telnet backdoor, and execute
        cd /i-data/md0
        find . | grep <filename>
    where <filename> is your file (if it contains spaces, put it in quotes). It should show up, showing where it is.
    You can also get a complete listing by only running 'find .'
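    For a rough count of what find can actually see (a minimal sketch using plain BusyBox tools), you can compare against the 399705 files e2fsck reported:

        cd /i-data/md0
        find . -type f | wc -l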

    Does this mean to say that the only way to recover usage is by the repair button, letting it copy everything over to the ESATA drive to reconstruct the RAID1?
    No. A degraded array can simply be used; it's just not redundant. That note is about a crashed internal disk: apparently the external disk isn't visible in that case, so you have to put the external disk inside the box to get a normal, degraded volume.
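    Once your files are visible again and you still want the mirror, the second disk can be re-added to the degraded array by hand. A minimal sketch, assuming the eSATA disk still shows up as /dev/sdb2 as in your /proc/partitions output, and that its current contents may be overwritten:

        su
        mdadm /dev/md0 --add /dev/sdb2
        cat /proc/mdstat    # shows the rebuild progress

    The Repair button in the web GUI presumably does much the same thing.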
  • stest
    stest Posts: 9  Freshman Member
    I don't remember any specific filenames, but I tried a bunch of file extensions with nothing promising.
        ~ $ cd /i-data/md0/
        /etc/zyxel/storage/sysvol $ find . | grep apk
        /etc/zyxel/storage/sysvol $ find . | grep jpg
        /etc/zyxel/storage/sysvol $ find . | grep jpeg
        /etc/zyxel/storage/sysvol $ find . | grep png
        /etc/zyxel/storage/sysvol $ find . | grep txt
        ./.zyxel/storage.txt
        /etc/zyxel/storage/sysvol $ find . | grep mpg
        /etc/zyxel/storage/sysvol $ find . | grep mpeg
        /etc/zyxel/storage/sysvol $ find . | grep mov
        /etc/zyxel/storage/sysvol $ find . | grep mkv
        /etc/zyxel/storage/sysvol $ find . | grep mp4
        /etc/zyxel/storage/sysvol $ find . | grep exe
