NAS520: Restore access to lost volume/disk 1 [SOLVED]
ariek
Posts: 30 Freshman Member
I have lost access to volume 1/disk 1/sysvol. I can log on to the NAS520, but when I choose "Storage Manager" it tells me that there is "No Storage" on the internal volume (external drives are not connected). Obviously, I can create a new volume, but then all data on the HDD will be lost.
However, when I click on other icons in the webgui, everything that is on the HDD is visible/present.
#NAS_Nov_2019
0
Comments
-
What kind of RAID type did you use? Are there any screenshots?
When you use the File Browser of the NAS520, or CIFS, FTP, etc., are you able to access your data?
If yes, I suggest you back up your data first.
0 -
It is only 1 HDD, JBOD structure, and that is to my knowledge just a plain ext4 drive. The status is LOST, internal HDD, and I can't access the data because the volume doesn't exist (it is not recognized). I would have to create a new volume, but that will format the drive and all data will be lost.

With MiniTool Partition Manager I examined the HDD (note: I have changed some partitions from primary to logical). When I hook up the drive as an external drive I can access the drive content: the 'main partition' [with all my files, etc.] and the 'system partition' (which contains a system.img).

The file structure:

I'm able to access the drive content when the HDD is mounted as an external drive (USB1), but I can't when the drive is an internal drive, as the drive is not mounted or not recognized as a valid volume. If only I could fix the mounting of the internal HDD, the NAS should theoretically be functioning just fine. I can connect to the NAS over telnet/SSH but only have access to the RAM drive.
0 -
(note: I have changed some partitions from primary to logical)
Why? In that case the disk is no longer recognized as an internal disk. At boot this script is run to check whether it's a valid internal disk:
#action: check is legal disk partition format
# 1. check in internal disk
# 2. check disk partition number
# 3. check FW size
# 4. check swap size
CheckInternalDiskFormat()
{
    sdx=$1
    IsInternDisk="`${INTERN_DISK_CHKER} -c ${sdx}`"
    if [ "${IsInternDisk}" != "yes" ]; then
        return ${FAIL}
    fi

    numOfDiskPart=`${LS} -d /sys/block/${sdx}/${sdx}? | ${WC} -l`
    if [ ${numOfDiskPart} != ${LEGAL_DISK_PARTITION_NUM} ]; then
        return ${FAIL}
    fi

    sdx1Siz=`${CAT} /sys/block/${sdx}/${sdx}1/size`
    if [ ${sdx1Siz} != ${LEGAL_FW_SIZ} ]; then
        return ${FAIL}
    fi

    sdx2Siz=`${CAT} /sys/block/${sdx}/${sdx}2/size`
    if [ ${sdx2Siz} != ${LEGAL_SWAP_SIZ} ]; then
        return ${FAIL}
    fi

    return ${SUCCESS}
}
It checks partitions 1 and 2 for their sizes. A problem is that primary partitions are numbered 1..4, while logical partitions are numbered 5 and up. So this disk is no longer valid. Can you change that back?
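For reference, those checks can be re-run by hand over SSH. This is only a sketch: I'm assuming the internal disk is sda, and while the two LEGAL_* sizes are the values quoted from the firmware script, the expected partition count of 3 is my assumption based on the stock firmware/swap/data layout.

```shell
# Sketch: the same three checks, by hand (commands commented out because
# the /sys paths only exist on the NAS itself).
LEGAL_DISK_PARTITION_NUM=3   # assumption: stock layout has 3 partitions
LEGAL_FW_SIZ=3997696         # sectors, from the firmware script
LEGAL_SWAP_SIZ=3999744       # sectors, from the firmware script
# On the NAS over SSH:
#   [ "$(ls -d /sys/block/sda/sda? | wc -l)" = "$LEGAL_DISK_PARTITION_NUM" ] && echo "partition count OK"
#   [ "$(cat /sys/block/sda/sda1/size)" = "$LEGAL_FW_SIZ" ]  && echo "FW size OK"
#   [ "$(cat /sys/block/sda/sda2/size)" = "$LEGAL_SWAP_SIZ" ] && echo "swap size OK"
echo "$LEGAL_FW_SIZ $LEGAL_SWAP_SIZ"
```

If any of the three checks fails, the boot script returns FAIL and the disk is treated as uninitialized.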
0 -
I can change the partitions back from logical to primary with MiniTool Partition Manager. After changing them back, the HDD is still not recognized as an internal HDD/volume.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
unused devices: <none>
# fdisk -l

Disk /dev/loop0: 144 MiB, 150994944 bytes, 294912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock0: 256 KiB, 262144 bytes, 512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock1: 512 KiB, 524288 bytes, 1024 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock2: 256 KiB, 262144 bytes, 512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock3: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock4: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock5: 110 MiB, 115343360 bytes, 225280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock6: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock7: 110 MiB, 115343360 bytes, 225280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock8: 6 MiB, 6291456 bytes, 12288 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x1394754f

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1             4096    3999615    3995520  1.9G 83 Linux
/dev/sda2          8261632 3907028607 3898766976  1.8T 83 Linux
I'm not sure if the partition IDs (83) are correct.

43 Linux Native (sharing disk with DRDOS)
82 Linux swap
83 Linux native
85 Extended Linux
88 Linux plain text partition table
F0 Linux/PA-RISC Boot Loader
FD Linux Raid Auto
0 -
Hm. That didn't work out as intended. Somehow the size of partition 1 changed by converting it to logical and back, and somehow your 2nd partition is lost.
/dev/sda1 4096 3999615 3995520 1.9G 83 Linux
The script expects different sizes:

LEGAL_FW_SIZ=3997696 #2047MB using parted created
LEGAL_SWAP_SIZ=3999744 #2048MB using parted created
and indeed my 520 has these sizes:

Device         Start        End    Sectors   Size Type
/dev/sda1       2048    3999743    3997696   1.9G Linux RAID
/dev/sda2    3999744    7999487    3999744   1.9G Linux RAID
/dev/sda3    7999488 1953523711 1945524224 927.7G Linux RAID
Have you run some partition seek tool on this disk? It is striking that my 1st partition starts on sector 2048, while yours starts on 4096. But mine has a RAID array header of 2048 sectors, and the data size of the array is 3995520, which exactly matches your partition size.
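A quick sanity check of that arithmetic, with all numbers in 512-byte sectors as reported by the two partition tables:

```shell
# My sda1 starts at sector 2048 and its md superblock reports a data offset
# of another 2048 sectors, so the array's data begins at:
echo $((2048 + 2048))   # 4096 -- exactly where your recovered partition starts
```

And the array's Used Dev Size of 3995520 sectors equals your partition's sector count, which is what you'd expect if a filesystem-seeking tool created a partition around the bare data area.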
So a tool searching for filesystems could have done this.

mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 87cde249:5bd2cad3:9d19e8e2:96d47bda
           Name : NAS520:0  (local to host NAS520)
  Creation Time : Fri Aug 28 19:06:56 2015
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3995648 (1951.33 MiB 2045.77 MB)
     Array Size : 1997760 (1951.27 MiB 2045.71 MB)
  Used Dev Size : 3995520 (1951.27 MiB 2045.71 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e1c1bd04:3bcd2c01:e7ee3d59:7c7152d6

    Update Time : Sat Nov 16 09:46:06 2019
       Checksum : d32e8be3 - correct
         Events : 2434

   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing)

If that is the case, I think you can clone my partition table, except for the end of partition 3. That should be the highest available sector number.
0 -
This was my 'sysvol'
/i-data/e162cf92/
# mdadm --examine /dev/sda1
/dev/sda1:
   MBR Magic : aa55
Partition[0] : 3995520 sectors at 63 (type 85)
~ # mdadm --examine /dev/sda2
mdadm: No md superblock detected on /dev/sda2.
Above is the current state, as is the output in my previous post. Below is the output from the previous state, before I converted the partitions back from logical to primary.

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
unused devices: <none>

fdisk -l

Disk /dev/loop0: 144 MiB, 150994944 bytes, 294912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock0: 256 KiB, 262144 bytes, 512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock1: 512 KiB, 524288 bytes, 1024 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock2: 256 KiB, 262144 bytes, 512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock3: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock4: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock5: 110 MiB, 115343360 bytes, 225280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock6: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock7: 110 MiB, 115343360 bytes, 225280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mtdblock8: 6 MiB, 6291456 bytes, 12288 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x1394754f

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1             4033    3999615    3995583  1.9G  f W95 Ext'd (LBA)
/dev/sda2          8261632 3907028607 3898766976  1.8T 83 Linux
/dev/sda5             4096    3999615    3995520  1.9G 83 Linux

Partition 2 does not start on physical sector boundary.
Partition table entries are not in disk order.

mdadm --examine /dev/sda1
/dev/sda1:
   MBR Magic : aa55
Partition[0] : 3995520 sectors at 63 (type 83)

# mdadm --examine /dev/sda2
mdadm: No md superblock detected on /dev/sda2.
mdadm: No md superblock detected on /dev/sda5.

# mdadm --assemble --scan
mdadm main: failed to get exclusive lock on mapfile
mdadm: No arrays found in config file or automatically
Are Zyxel NASes formatted as ext2 or ext4? I don't know how to clone a partition table, but I would give it a try.
0 -
Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1             4033    3999615    3995583  1.9G  f W95 Ext'd (LBA)
/dev/sda2          8261632 3907028607 3898766976  1.8T 83 Linux
/dev/sda5             4096    3999615    3995520  1.9G 83 Linux
You have only 2 partitions here. sda1 is an extended partition, which contains one logical partition, sda5. You can see that from their start and end sectors. And sda2 is a primary partition, containing your data. (Somehow, I hope.)

Let's have a look whether a RAID header can be found at sectors 2048 and 7999488, which is where they should be according to my partition table. Create a loop device at the given offset of sda, and let mdadm have a look at it:

losetup -o 2097152 /dev/loop1 /dev/sda
mdadm --examine /dev/loop1
losetup -d /dev/loop1

losetup -o 4095737856 /dev/loop1 /dev/sda
mdadm --examine /dev/loop1
losetup -d /dev/loop1
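For reference, losetup's -o takes a byte offset rather than a sector number, i.e. the sector number multiplied by 512. For the partition-3 candidate:

```shell
# Byte offset for the RAID header candidate at sector 7999488:
echo $((7999488 * 512))   # 4095737856
```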
> Are Zyxel NASes formatted as ext2 or ext4?

The 520 uses ext4. Older ones used ext3, xfs or reiserfs.

> I don't know how to clone a partition table, but I would give it a try.

That is the next step.
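Cloning the table could be sketched with sfdisk, assuming an MBR (dos) label like the one fdisk reported. The start/size values are taken from my NAS520's partition table; using type fd (Linux RAID autodetect) is an assumption on my part, as the firmware might be equally happy with 83. Omitting the size on the last partition makes sfdisk use the rest of the disk, which handles the "highest available sector" requirement.

```shell
# A sketch only, under the assumptions above -- review every number first.
# Back up the current table (non-destructive):
#   sfdisk -d /dev/sda > sda_table.backup
# Candidate layout (start/size in 512-byte sectors):
printf '%s\n' \
  'label: dos' \
  'start=2048,    size=3997696, type=fd' \
  'start=3999744, size=3999744, type=fd' \
  'start=7999488, type=fd' \
  > sda_table.candidate
cat sda_table.candidate
# DESTRUCTIVE -- only after checking everything against your disk:
#   sfdisk /dev/sda < sda_table.candidate
```

Writing a wrong table can make recovery harder, so keep the backup dump and double-check the start sectors against where mdadm actually finds superblocks.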
1 -
# losetup -o 2097152 /dev/loop1 /dev/sdamdadm --examine /dev/loop1losetup -d /dev/loop1
losetup: unrecognized option '--examine'
BusyBox v1.19.4 (2018-11-08 14:45:19 CST) multi-call binary.

Usage: losetup [-o OFS] LOOPDEV FILE - associate loop devices
       losetup -d LOOPDEV - disassociate
       losetup [-f] - show

~ # losetup -o 4095737856 /dev/loop1 /dev/sdamdadm --examine /dev/loop1losetup -d /dev/loop1
losetup: unrecognized option '--examine'
BusyBox v1.19.4 (2018-11-08 14:45:19 CST) multi-call binary.

Usage: losetup [-o OFS] LOOPDEV FILE - associate loop devices
       losetup -d LOOPDEV - disassociate
       losetup [-f] - show

~ #
I've got an error.

losetup
/dev/loop0: 0 /firmware/mnt/sysdisk/sysdisk.img
0 -
Sorry. The stupid forum software bôrked up my commands. I've edited my post.
0 -
# losetup -o 2097152 /dev/loop1 /dev/sda
~ # mdadm --examine /dev/loop1
mdadm: No md superblock detected on /dev/loop1.
~ # losetup -d /dev/loop1
~ #
~ # losetup -o 4095737856 /dev/loop1 /dev/sda
~ # mdadm --examine /dev/loop1
/dev/loop1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : e162cf92:8d8b0390:a6f1e2ac:d411daa8
           Name : NAS520:2  (local to host NAS520)
  Creation Time : Thu Dec 3 10:29:20 2015
     Raid Level : raid1
   Raid Devices : 1

 Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
     Array Size : 1949383488 (1859.08 GiB 1996.17 GB)
  Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 96ebf461:9976417b:13438673:58d3f678

    Update Time : Thu Nov 14 18:16:56 2019
       Checksum : d7b1b221 - correct
         Events : 215

   Device Role : Active device 0
   Array State : A ('A' == active, '.' == missing)
~ # losetup -d /dev/loop1
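A hedged aside on that result: the superblock found at sector 7999488 reports Data Offset : 262144 sectors, so the ext4 filesystem inside the array should begin at sector 7999488 + 262144. In principle that lets you reach the data via a loop device without rewriting the partition table first. Note that the BusyBox losetup shown earlier supports only -o/-d/-f, and the mount point here is hypothetical.

```shell
# Where the ext4 filesystem itself should start on /dev/sda, in bytes:
echo $(( (7999488 + 262144) * 512 ))   # 4229955584
# Sketch only -- on the NAS (untested there):
#   losetup -o 4229955584 /dev/loop2 /dev/sda
#   mount -o ro /dev/loop2 /mnt/rescue   # /mnt/rescue is a hypothetical mount point
```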
0