RAID1 degraded on NSA325 v2 after file deletion

siraph  Posts: 4  Freshman Member

I've been using my NSA325 for many years now and have never replaced the hard drives; it has always worked fine. It is a two-bay unit, and I configured a RAID1 volume.

Recently I was running out of free disk space, so I moved a lot of files (approx. 200GB of data) to another storage device with rsync, and then deleted them folder by folder.

A few days later I checked the RAID1 status in the web admin interface, and it said the volume was degraded. In the volume detail I only see disk1. I tried scan and repair: the scan shows no errors, the repair stays stuck at "Recovering 0.0%" for a long time, and then the volume shows Degraded again. The disk LEDs on the front of the NAS were both green, but now, after a few scan-and-repair attempts, disk2 shows red…

Checking S.M.A.R.T., both disks are shown as healthy, and as far as I can read the SMART summaries, both disks are working fine and not faulty.

How can I solve my problem? Thanks

All Replies

  • siraph  Posts: 4  Freshman Member

    After reading various similar posts, here is some additional information.

    I installed two 2TB WD Red drives to form my RAID1.

    Firmware: V4.81(AALS.1)

    Here are some screenshots of the web interface:

    At one point, in the section above, I could see only disk1. I tried a Repair, but it never went beyond "Recovering 0.04%". The LED turned red for a while, then back to green, but the volume stayed degraded…

    The two drives on the SMART page:

    Disk1 full summary:

    Disk2 full summary:

    Recent logs shown in the web interface:

    Shell commands:

    ~$ cat /proc/partitions
    major minor  #blocks  name

       7     0     143360 loop0
       8     0 1953514584 sda
       8     1     514048 sda1
       8     2 1952997952 sda2
      31     0       1024 mtdblock0
       8    16 1953514584 sdb
       8    17     514048 sdb1
       8    18 1952997952 sdb2
      31     1        512 mtdblock1
      31     2        512 mtdblock2
      31     3        512 mtdblock3
      31     4      10240 mtdblock4
      31     5      10240 mtdblock5
      31     6      48896 mtdblock6
      31     7      10240 mtdblock7
      31     8      48896 mtdblock8
       9     0 1952996792 md0

    ~$ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1]
    md0 : active raid1 sdb2[2] sda2[0]
          1952996792 blocks super 1.2 [2/1] [U_]

    ~$ mdadm --examine /dev/sda
    -sh: mdadm: command not found

    Maybe it's all OK… But if one (or both) of my drives is failing, I would like to know more before I start buying another drive and/or NAS.
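
    Since mdadm is not in the PATH here, maybe the array state can also be read through the md sysfs interface. This should be the standard layout for Linux software RAID, but I haven't verified these exact paths on this kernel, so treat it as a sketch:

    ~$ cat /sys/block/md0/md/array_state   # overall state, e.g. "clean" or "active"
    ~$ cat /sys/block/md0/md/degraded      # number of missing/failed members
    ~$ cat /sys/block/md0/md/sync_action   # "idle" when no rebuild is running

    (And if mdadm does turn out to exist under a full path, the member partitions /dev/sda2 and /dev/sdb2 would be the devices to --examine, not the whole disks.)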

  • Mijzelf  Posts: 2,786  Guru Member

    A Current_Pending_Sector attribute with a raw value of 2 could cause the disk to be dropped from the array, but it cannot stop a running repair, and after the repair it should be gone.
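
    If smartmontools happens to be installed on the box (I'm not sure it is present on the stock firmware, so treat this as optional), you can keep an eye on that attribute from the shell:

    ~$ smartctl -A /dev/sdb | grep -i -E 'Pending|Reallocated'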

    Your log shows that a repair has started, but /proc/mdstat doesn't show that. It should show the rebuild status and a progress indicator. (Sorry, I can't remember the exact layout, but you'll recognize it when you see it; roughly like the sketch below.)
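
    As an illustration only (the recovery numbers are made up), a RAID1 rebuild in progress looks roughly like this in /proc/mdstat:

    md0 : active raid1 sdb2[2] sda2[0]
          1952996792 blocks super 1.2 [2/1] [U_]
          [>....................]  recovery =  1.2% (23600000/1952996792) finish=182.5min speed=176000K/sec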

    So apparently the repair has already stopped, and the firmware log hasn't noticed. Maybe the kernel log shows a reason. Execute 'dmesg' to read the kernel log.
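
    The raw dmesg output can be long; the md and disk related lines are the interesting ones. Something like this should narrow it down (plain busybox grep, so it ought to work in the stock shell):

    ~$ dmesg | grep -i -E 'md0|raid1|sd[ab]'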

  • siraph  Posts: 4  Freshman Member

    Thanks for the reply. I'm attaching the dmesg output to this post.
