NAS326 - Degraded issue. Volume 1 got degraded after switching to RAID 1

After a while I decided to add a second disk to use in RAID 1 for Volume 1. Here are the specs:

Disk 1 - Volume 1 - 500GB, 43GB used
Disk 2 - Volume 2 - 1TB, blank, brand new

I first installed the new 1TB drive and then changed the Volume 1 RAID mode to RAID 1.

The next window showed me all the correct options: Disk 1 would be mirrored to Disk 2 as RAID 1, and about half of Disk 2 would be wasted. That's fine, so I moved on.

The NAS started the process; the disk volume page showed a progress bar with about 3 hours left and the percentage slowly growing past 0.3%. Soon after, it stopped and Volume 1 became "degraded". No repair option is available either (the Manage button is grayed out).

How can I fix it? Any help?
If I reboot, the RAID 1 creation process starts working again for a few minutes before Volume 1 becomes "degraded" again.

Somewhere on this forum I read about a user swapping the disk bays, and it seemed to fix the issue.

Accepted Solution

  • Mijzelf
    Mijzelf Posts: 2,598  Guru Member
    edited July 2022 Answer ✓
    I *think* you have a 'Current Pending Sector' error. That can mean a bit has flipped in a sector, making the checksum fail, so the disk cannot (reliably) read that sector. That doesn't have to be a problem; if you just overwrite the sector, it's fine.
    The problem is that the raid manager is pretty dumb. It simply copies the whole surface of disk 1 to disk 2, and when it encounters a read error, it stops. It doesn't know about filesystems, so it doesn't know whether the sector is actually in use.
    As your disk is mainly empty, odds are that the problem sector is not in use. So if you overwrite every unused sector, the bad one will probably be overwritten as well, solving the issue.
    Fortunately that is quite simple. Log in over ssh as admin and execute

    # change to the admin share on volume1
    cd /i-data/sysvol/admin/
    # write zeros to a file until the volume runs out of free space
    dd if=/dev/zero of=bigfile bs=32M

    That will copy the output of /dev/zero (an endless stream of zeros) to a file 'bigfile' in the admin share. You can watch it growing in Explorer. When the disk is full, dd will abort, and you will be left with a file 'bigfile' of around 450GB, which can then be deleted:

    rm bigfile

    Now the major part of the unused sectors has been overwritten with zeros. Reboot the NAS and see if it completes the raid rebuild.

    The generation of the file will take about 1.5 hours, I think.
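
    If you want to double-check things from the same ssh session, here is a small sketch. Note that /dev/sda is only an assumption (the first disk usually shows up there), and smartctl may or may not be present on the stock firmware; /proc/mdstat is standard Linux md and should always be readable.

    # show SMART attributes of disk 1; the Current_Pending_Sector raw value is the one to watch
    smartctl -A /dev/sda

    # show the md raid status and the rebuild progress after the reboot
    cat /proc/mdstat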

All Replies

  • Adrian01
    Adrian01 Posts: 6
    I forgot to say that the drives' status is good and green both via S.M.A.R.T. and the disk manager.



  • ikubuf
    ikubuf Posts: 134  Ally Member
    Do you have a backup of the data?
    I think you can re-create the volume as a RAID 1 type if you have a backup.

  • Adrian01
    Adrian01 Posts: 6
    ikubuf said:
    Do you have a backup of the data?
    I think you can re-create the volume as a RAID 1 type if you have a backup.
    I have an online backup of the entire disk 1 drive.
    How can I re-create the volume? Deleting the existing one first?

    Mijzelf said:
    I *think* you have a 'Current Pending Sector' error. That can mean a bit has flipped in a sector, making the checksum fail, so the disk cannot (reliably) read that sector. That doesn't have to be a problem; if you just overwrite the sector, it's fine.
    Thanks for your help, I'll have to try it.
    Right now I can't do it because the NAS is always in use at the office. I'll try later on, as soon as I can.

    How can I log in over ssh? Is that a plug-in or something?

    Thanks for your time.
  • Mijzelf
    Mijzelf Posts: 2,598  Guru Member
    You can enable the ssh server in the web interface (Control Panel -> Network -> Terminal). On your PC you have to use an ssh client; on Windows you can use PuTTY for that.
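    If you'd rather use a command-line client (Windows 10/11, macOS and Linux all ship one), the connection looks roughly like this; the address 192.168.1.100 is only an example, use your NAS's actual IP:

    # open an ssh session to the NAS as the admin user
    ssh admin@192.168.1.100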
  • Adrian01
    Adrian01 Posts: 6
    edited July 2022
    Mijzelf said:
    I *think* you have a 'Current Pending Sector' error. That can mean a bit has flipped in a sector, making the checksum fail, so the disk cannot (reliably) read that sector. That doesn't have to be a problem; if you just overwrite the sector, it's fine.

    It worked! Thanks, friend.

    I didn't know where to start to fix this issue, but your tip was perfectly on point. It took about 2 hours and then the error was gone. I now have the two disks in RAID 1, and both show green flags.

    Thanks a lot, you saved me and our work at the office.
