NAS326 - Degraded issue. Volume 1 got degraded after switching to RAID 1
Adrian01
Posts: 6
After a while I decided to add a second disk to use in RAID 1 for Volume 1. Here are the specs:
Disk 1 - Volume 1 - 500GB, 43GB used
Disk 2 - Volume 2 - 1TB blank brand new
I first installed the new 1TB drive and then changed the Volume 1 RAID mode to RAID 1.
The next window showed me all the correct options: disk 1 would be changed to RAID 1 over disk 2, and about half of disk 2 would be wasted. That's fine, so I moved on.
The NAS started, and the disk/volume page showed a progress bar with about 3 hours left and the percentage slowly growing from 0.3%. Soon after, it stopped and Volume 1 became "degraded". No repair option is available either (the Manage button is grayed out).
How can I fix it? Any help?
If I reboot, the RAID 1 creation process starts working again for a few minutes before Volume 1 gets "degraded" again.
Somewhere on this forum I read about a user swapping the disk bays, and it seemed to fix the issue.
Accepted Solution
-
I *think* you have a 'Current Pending Sector' error. That can mean that a bit is flipped on a sector, making the checksum fail, so the disk cannot (reliably) read that sector. That doesn't have to be a problem; if you just overwrite it, it's fine.
The problem is that the RAID manager is pretty dumb. It just copies the whole surface of disk 1 to disk 2, and when it encounters an error, it stops. It doesn't know about filesystems, so it doesn't know whether the sector is actually used.
As your disk is mainly empty, odds are that the problem sector is not used. So if you just overwrite every unused sector, it might be overwritten, solving the issue.
Fortunately that is quite simple. Log in over SSH as admin and execute:
cd /i-data/sysvol/admin/
dd if=/dev/zero of=bigfile bs=32M
That will copy the output of /dev/zero (an endless stream of zeros) to a file 'bigfile' in the admin share. You can see it growing in Explorer. When the disk is full, dd will be aborted, and you will have a file 'bigfile' of around 450GB, which can be deleted:
rm bigfile
Now the major part of the unused sectors is overwritten with zeros. Reboot the NAS and see if it completes the RAID building.
The generation of the file will take about 1.5 hours, I think.
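As a rough sketch (not part of the original reply), the rebuild and the pending-sector count could be checked from the same SSH session; /dev/sda is a placeholder for the 500GB disk, and smartctl may or may not be present in the stock NAS326 firmware:
cat /proc/mdstat        # shows the software-RAID (md) arrays and any resync progress
smartctl -A /dev/sda    # raw SMART attributes; Current_Pending_Sector (ID 197) should return to 0 once the bad sector is rewritten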
All Replies
-
I forgot to say that the drives' status is good and green, both via S.M.A.R.T. and in the disk manager.
-
Do you back up the data?
I think you can re-create the volume as RAID 1 if you have a backup of the data.
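As a hedged aside, if the backup-and-recreate route were taken instead, the data could be copied off over SSH before deleting the volume; /mnt/backup is a hypothetical mount point for an external disk, and rsync availability on the firmware is an assumption:
rsync -a --progress /i-data/sysvol/admin/ /mnt/backup/admin/    # copy the admin share off the NAS, preserving attributes
(cp -a would do a comparable one-off copy if rsync is not available.)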
-
ikubuf said: Do you back up the data? I think you can re-create the volume as RAID 1 if you have a backup of the data.
How can I re-create the volume? Do I delete the existing one first?
Mijzelf said: I *think* you have a 'Current Pending Sector' error. That can mean that a bit is flipped on a sector, making the checksum fail, so the disk cannot (reliably) read that sector. That doesn't have to be a problem; if you just overwrite it, it's fine.
Right now I can't do it because the NAS is always in use at the office. I'll try later on, as soon as I can.
How can I log in over SSH? Is that a plug-in or what?
Thanks for your time.
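For reference (this question is not answered later in the thread), the SSH step is just a regular client login once the SSH service is enabled on the NAS; the IP address below is a placeholder:
ssh admin@192.168.1.10              # replace 192.168.1.10 with the NAS's actual address, log in as admin
cd /i-data/sysvol/admin/            # then run the commands from the accepted answer
dd if=/dev/zero of=bigfile bs=32M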
-
Mijzelf said: I *think* you have a 'Current Pending Sector' error. [...] Reboot the NAS and see if it completes the RAID building. The generation of the file will take about 1.5 hours, I think.
I didn't know where to start to fix this issue, but your tip was perfectly on point. It took about 2 hours, and then the error was gone. I now have the two disks in RAID 1 with both green flags.
Thanks a lot, you saved me and our work at the office. ***** (five stars)