NAS310 - RAID1 failed, volume down, how do I recover data?
Comments
-
$ cd /i-data/md0/
/etc/zyxel/storage/sysvol $
That looks strange. Normally /i-data/md0 is a symlink to /etc/zyxel/storage/sysvol, which is in turn a symlink to the data volume in /i-data/, so I'd expect the prompt to end in /i-data/<something>, not in /etc/zyxel/storage/sysvol. Possibly that symlink is damaged. Can you post the output of

ls -l /etc/zyxel/storage/

ls -l /i-data/
-
~ $ ls -l /etc/zyxel/storage/
-rw-rw-rw- 1 root root 433 Apr 25 2017 extuuid.table
-rw-rw-rw- 1 root root 44 Jun 23 2012 mduuid.table
drwxrwxrwx 1 root root 2048 Jul 29 2013 sysvol
-rw-rw-rw- 1 root root 29 Apr 24 2017 usbcopy.table
-rw-rw-rw- 1 root root 44 Apr 24 2017 usbzync.table
~ $ ls -l /i-data/
drwxrwxrwx 17 root root 4096 Aug 1 19:10 898a84cb
lrwxrwxrwx 1 root root 25 Aug 4 19:15 md0 -> /etc/zyxel/storage/sysvol
-
That is strange. sysvol is a directory, and it was created in 2013! How can it ever have worked? Anyway, let's rename it and create a symlink:

su
mv /etc/zyxel/storage/sysvol /etc/zyxel/storage/sysvol.old
ln -s /i-data/898a84cb /etc/zyxel/storage/sysvol
reboot
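The rename-and-symlink repair above can be rehearsed safely before touching the NAS. The sketch below rebuilds the same two-hop symlink layout inside a throwaway temp directory (the directory names mirror the NAS paths but are purely illustrative), so you can see how /i-data/md0 is supposed to resolve through sysvol to the data directory:

```shell
# Simulate the NAS layout in a temp dir; nothing outside $tmp is touched.
tmp=$(mktemp -d)
mkdir -p "$tmp/i-data/898a84cb" "$tmp/etc/zyxel/storage"

# sysvol -> data volume, md0 -> sysvol (the chain the fix recreates)
ln -s "$tmp/i-data/898a84cb"          "$tmp/etc/zyxel/storage/sysvol"
ln -s "$tmp/etc/zyxel/storage/sysvol" "$tmp/i-data/md0"

# Follows both links; should end in .../i-data/898a84cb
readlink -f "$tmp/i-data/md0"
```

On the real box, `readlink -f /i-data/md0` after the fix (but before rebooting) is a quick way to confirm the chain resolves to the data volume.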
-
2013 is a reasonable estimate for when I started using this drive in the NAS; it might have been even earlier. The 310 is pretty old hardware lol. If that was a deciding factor, should I still follow those steps?
-
Yes. If it doesn't lead to anything, the directory can be changed back. But as far as I remember, even my NSA220 from 2008 had a symlink for sysvol.
-
Just a silly idea from me, if it isn't resolved already.
Remove your second added 1 TB drive from your NAS. Reformat this drive, for example in another computer. Put this drive back in your NAS and start all over.
Just my thoughts about what I would do if everything else fails.
NAS326 + MetaRepository + Entware
-
I've been running an NSA310 for a long time with just a 1TB hard drive inside, and now I am trying to add a second 1TB eSATA drive for a RAID1 setup. Not long after I clicked the migrate button to start, the youngest in the house managed to grab the eSATA cable and pulled it out of the NAS. Now my volume is down, even though all it should have been doing was copying data to the other drive.
Hello, I am in a very similar situation now, except for two things:
1) my eSATA drive is 4TB (the internal one is 1TB)
2) the external drive was connected the whole time, but the WebGUI reported "failed" during the migration process
Do the different disk sizes matter, and could they have caused the failure? The migrate button in the web client was enabled and clickable, although I did not see the eSATA disk there.
Here are the results of the scan utility (it contains a warning):

e2fsck 1.41.14 (22-Dec-2010)
The filesystem size (according to the superblock) is 244061472 blocks
The physical size of the device is 244061454 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? no
/dev/md0 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create? no
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: ********** WARNING: Filesystem still has errors **********
/dev/md0: 180825/61022208 files (12.2% non-contiguous), 240908460/244061472 blocks
I also tried those cat /proc commands, with the same results as user 'stest' - the sizes and numbers of blocks are the same (except I have a 4TB eSATA drive instead of 1TB). I found this in dmesg:

raid1: raid set md0 active with 1 out of 2 mirrors
md0: detected capacity change from 0 to 999675715584
md0: unknown partition table
EXT4-fs (md0): bad geometry: block count 244061472 exceeds size of device (244061454 blocks)
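The "bad geometry" line makes the mismatch concrete: the superblock claims 244061472 blocks while the device offers only 244061454. A quick calculation (assuming the 4 kB ext4 block size that is standard for a volume this large) shows how small the shortfall actually is:

```shell
# blocks claimed by the superblock minus blocks the device actually has
echo $(( 244061472 - 244061454 ))
# the same shortfall in bytes, assuming 4096-byte filesystem blocks
echo $(( (244061472 - 244061454) * 4096 ))
```

That is 18 blocks, or 73728 bytes - a tiny amount relative to the ~1 TB volume, which is why shrinking the filesystem slightly is even on the table.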
I am worried about running resize2fs, because of the e2fsck warning and because the drive had nearly no free space left (more than 98% space used).
What should I do next to rescue data?
-
I am worried about running resize2fs, because of the e2fsck warning and because the drive had nearly no free space left (more than 98% space used).
If you have read the previous posts, you know that the e2fsck warning is just a symptom of the problem. That's why resize2fs (and e2fsck) are run on the partition, not on the array.
More than 98% used is indeed a problem. You could use the -M flag (see man resize2fs), but I don't know how 'good' resize2fs is. It won't kill the filesystem, but I don't know if it will squeeze enough files into unused space to be able to shrink the partition. On the other hand, you only need to shrink it by 18 blocks (about 72 kB at a 4 kB block size).
-
Thank you Mijzelf, although in the end I didn't try resize2fs.
First I cloned the internal 1TB disk to the external 4TB one using dd if=/dev/sda of=/dev/sdb. Then I swapped the disks and tried to check and repair the cloned disk via e2fsck. But no luck: e2fsck was killed every time I tried, because it ran out of memory.
Luckily my data seemed to be OK - I was able to mount the partition, copy some randomly chosen files to a USB drive and check/view them on a PC.
So I gave up on repairing the RAIDed drive and chose to go the clean way.
I did a factory reset, created a new volume on the 4TB disk, did the basic configuration, and now I am copying the data via eSATA to the internal drive in a terminal using cp.
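The dd command above copies the whole device byte for byte. A hedged sketch of the same idea, using ordinary files as safe stand-ins for /dev/sda and /dev/sdb (the bs= and conv=noerror options are my additions, not part of the original command - bs=1M speeds the copy up enormously, and conv=noerror keeps dd going past unreadable sectors on a failing disk):

```shell
# source.img and clone.img stand in for /dev/sda and /dev/sdb,
# so this can be run without root and without risking real disks
printf 'raid1 recovery test data' > source.img

# bs=1M: copy in 1 MiB chunks; conv=noerror: don't abort on read errors
dd if=source.img of=clone.img bs=1M conv=noerror 2>/dev/null

# verify the clone is byte-identical to the source
cmp -s source.img clone.img && echo "clone is identical"
```

When cloning a disk with suspected bad sectors, conv=noerror,sync (padding failed reads with zeros to keep offsets aligned) or a dedicated tool like ddrescue is generally the safer choice.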
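For a copy like this, cp's archive mode (-a) is worth knowing about: unlike plain cp, it recurses and preserves permissions, ownership, timestamps and symlinks. A small sketch with made-up paths (the esata/internal directory names are illustrative, not the NAS's real mount points):

```shell
# Illustrative source tree standing in for the mounted eSATA volume
tmp=$(mktemp -d)
mkdir -p "$tmp/esata/photos"
echo 'family photo' > "$tmp/esata/photos/img001.txt"

# -a = archive: recursive copy preserving permissions, times and links
cp -a "$tmp/esata" "$tmp/internal"

# the whole tree, attributes included, now exists under internal/
ls "$tmp/internal/photos"
```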