Expanding RAID volume on NAS326 is not working
Gunslinger
Posts: 5 Freshman Member
My NAS326 previously had two 1.5TB disks in RAID 1. After one of them failed, I decided to replace them both with 2TB WD Red disks. I first replaced the broken disk and then repaired the RAID volume. I then replaced the other disk and repaired the volume again. I now had a fully functioning 1.5TB RAID 1 volume on 2TB disks. Since then, I've tried to expand the volume to 2TB, but that doesn't seem to work.
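(For context: what the web UI's "Repair" and the later capacity expansion boil down to under the hood is roughly the sequence below. The device names follow the outputs later in this thread, and the partitioning of the new disk, which the firmware does automatically, is left out, so treat it purely as an illustration.)

su                                   # root shell over ssh

mdadm /dev/md2 --add /dev/sdb3       # re-add the replaced disk's data partition to the mirror
cat /proc/mdstat                     # watch the rebuild progress

mdadm --grow /dev/md2 --size=max     # grow the array to the full size of the new partitions
resize2fs /dev/md2                   # grow the filesystem on top of the array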
I've tried to restart the expansion several times, but it seems to stay forever in the "expanding" status. The Overview page of the Storage Manager says "Volume1 is expanding." If I check the volume under Internal Storage, the status of the volume is simply "Expanding". There's a spinner running around, but no time estimate or percentage information available.
The first time, I restarted the expansion after a day. The second time, after a week. The third time, after three weeks. Now the expansion has been running for a solid month. I'm getting pretty confident that the expansion is not working at all and will not finish no matter how long I wait. Any ideas on how to proceed / get things working?
#NAS_Jun_2019
Comments
-
Can you enable the ssh server, login over ssh, and post the output of
cat /proc/partitions

cat /proc/mdstat

su

mdadm --examine /dev/sd[ab]3
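(A minimal session for that, assuming the ssh service has been switched on in the web UI and the NAS sits at 192.168.1.100 — both just examples — would be:)

ssh admin@192.168.1.100    # log in with the admin account
su                         # become root for the mdadm command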
-
Sure! Here it goes:
cat /proc/partitions

major minor  #blocks  name
   7        0     146432 loop0
  31        0       2048 mtdblock0
  31        1       2048 mtdblock1
  31        2      10240 mtdblock2
  31        3      15360 mtdblock3
  31        4     108544 mtdblock4
  31        5      15360 mtdblock5
  31        6     108544 mtdblock6
   8        0 1953514584 sda
   8        1    1998848 sda1
   8        2    1999872 sda2
   8        3 1949514752 sda3
   8       16 1953514584 sdb
   8       17    1998848 sdb1
   8       18    1999872 sdb2
   8       19 1949514752 sdb3
   9        0    1997760 md0
   9        1    1998784 md1
   9        2 1949383680 md2

cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 sda3[3] sdb3[2]
      1949383680 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[3] sdb2[2]
      1998784 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[3] sdb1[2]
      1997760 blocks super 1.2 [2/2] [UU]
unused devices: <none>
su
mdadm --examine /dev/sd[ab]3

/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ae61f12f:b8df7792:b05fd5f6:8c108264
           Name : NAS326:2
  Creation Time : Sun Jan 22 22:37:47 2017
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
     Array Size : 1949383680 (1859.08 GiB 1996.17 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 4a7ca21a:b05d62be:070099a4:fc8f962e
    Update Time : Mon Jun 17 11:34:46 2019
       Checksum : 5ec43eb8 - correct
         Events : 900
    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ae61f12f:b8df7792:b05fd5f6:8c108264
           Name : NAS326:2
  Creation Time : Sun Jan 22 22:37:47 2017
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
     Array Size : 1949383680 (1859.08 GiB 1996.17 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 664c54ec:9b44985a:79ab8811:2c6fdd0c
    Update Time : Mon Jun 17 11:34:46 2019
       Checksum : 17e28060 - correct
         Events : 900
    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing)
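(A quick calculation with the numbers above already shows why the next reply only concerns the filesystem: the data partitions and the md array cover the full 2TB disks, and only the filesystem on md2 is still at the old ~1.5TB size.)

sda3/sdb3 partition size :  1949514752 KiB
mdadm data offset        :      262144 sectors = 131072 KiB
1949514752 - 131072      =  1949383680 KiB = reported size of md2

So the array already spans the whole partition, and no mdadm --grow is needed.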
-
The partitions and the RAID array are already resized, so only the filesystem still has to be grown.
su

resize2fs /dev/md2
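(If you want to double-check beforehand that the filesystem really is smaller than the device it sits on, something like the following should show it, assuming dumpe2fs and blockdev are present in the firmware; they usually ship alongside resize2fs and busybox:)

dumpe2fs -h /dev/md2 | grep 'Block count'   # filesystem size in blocks (block size is listed in the same output)
blockdev --getsize64 /dev/md2               # size of the underlying md device, in bytes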
-
I tried that, but I'm getting an error:
~ # resize2fs /dev/md2
resize2fs 1.42.12 (29-Aug-2014)
The filesystem can be resize to 487345920 blocks.chk_expansible=0
Filesystem at /dev/md2 is mounted on /i-data/ae61f12f; on-line resizing required
old_desc_blocks = 88, new_desc_blocks = 117
resize2fs: Permission denied to resize filesystem

And when I try to unmount the drive, I get another error:

~ # umount /dev/md2
umount: /i-data/ae61f12f: target is busy
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1).)
even though I have disabled the sharing on my local network.
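(The hint in the umount error is usable here: assuming fuser, or lsof, is present in the firmware, you can list what is still holding the volume open, which on this box is most likely the firmware's own daemons rather than the network shares:)

fuser -m /i-data/ae61f12f    # list PIDs with open files on the mounted volume
lsof /i-data/ae61f12f        # alternative, if lsof is installed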
-
In that case there is a filesystem error, so resize2fs wants to run fsck first, and for that the filesystem has to be unmounted. That is a bit hard, but you can let the firmware do it. Edit the file /etc/init.d/rc.shutdown. The box has one usable editor, vi. It's a nasty editor. You can get to edit mode by pressing i. After having made your adjustments, press <ESC>:wq to save the file and exit the editor. Search for '# swapoff'. Below that line add
/sbin/telnetd

/bin/sh
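(The relevant part of /etc/init.d/rc.shutdown would then look roughly like this; the surrounding lines differ per firmware version, so this is only a sketch of where the two lines end up:)

# ...earlier shutdown steps...

# swapoff
/sbin/telnetd    # start a telnet daemon after the firmware daemons have been stopped
/bin/sh          # blocking shell: keeps the box running until you are finished

# ...rest of the original script...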
Save the file, and shut down the box (command poweroff). After you have lost your ssh connection, you should be able to log in again over telnet. Now execute

umount /i-data/sysvol

e2fsck -f /dev/md2

resize2fs /dev/md2
After that you can continue the shutdown with

killall -9 /bin/sh
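(Once the box has booted again, the new size should show up both in the web UI and on the command line; a quick check over ssh would be something like:)

df -h /i-data/sysvol    # filesystem should now report roughly the full 2TB capacity
cat /proc/mdstat        # array should still show [2/2] [UU]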
-
Everything worked like a charm, right until the last step! After executing that
killall -9 /bin/sh
I just got a message that there were no processes to stop. After that, I tried to continue the shutdown with 'poweroff'. The telnet connection to the NAS seemed to disconnect, but the NAS itself did not shut down. Can I just force a shutdown with the power button, or how should I restart it so I don't mess up the file system?
-
You can simply cut the power. The filesystem is not mounted.
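(If in doubt, you can confirm from the telnet shell that the data volume really is no longer mounted before pulling the plug:)

grep md2 /proc/mounts    # no output means the data volume is not mounted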
-
Yup, that's it. Working like a charm now. Thank you so, so much!
-
Hi. I'm having the same issue that Gunslinger had. I was able to follow the instructions all the way to the point where Mijzelf asks to edit the rc.shutdown file. Could you give more detailed instructions on how to do it? How to find the file, how to edit it... I'm a real newbie with these. Do I also need to enable the telnet service from the control panel of the NAS to be able to log in over telnet? Thank you.
-
It doesn't matter if you have enabled the telnet daemon or not. It's stopped at shutdown anyway. The injected code will be executed /after/ the firmware daemons have been stopped.

BTW, this stupid forum software has removed part of the instructions. The code to add is

/sbin/telnetd

/bin/sh

The first line starts a telnet daemon, the second a shell (on the serial port, I think), and it is blocking. Without that 2nd line a telnet daemon is started, and the box powers off.

About finding and editing that file, the command to do so is

vi /etc/init.d/rc.shutdown

and a brief instruction is in the post above.
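(Spelled out for a first-time vi user, the whole editing session would go roughly like this; the keystrokes are shown as comments, and the ssh login itself is covered earlier in the thread:)

su                             # become root
vi /etc/init.d/rc.shutdown     # open the shutdown script

# inside vi:
#   /swapoff <Enter>   search for the '# swapoff' line
#   o                  open a new line below it and enter insert mode
#   type the two lines:
#       /sbin/telnetd
#       /bin/sh
#   <ESC> :wq <Enter>  leave insert mode, save the file, and quit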