HELP! NSA325v2 RAID1 degraded!

doktor Posts: 14  Freshman Member
Hello, since the home forum is being migrated I don't know if this is the correct place to repost an open question I have. Please advise if I should post it somewhere else. Anyway, here is the original question, plus one additional step I took after the kind recommendation of user Mijzelf:

"I am trying to upgrade my current 2x2TB Hard disks to 2x8TB in RAID1.

So, first things first I made a backup of my 2TB files and then proceeded as follows:

1. Shutdown NSA325v2 (NAS for short).
2. Removed 2TB disk from tray 1 and inserted 8TB disk in tray 1.
3. Booted up and waited to see what happens.
4. NAS recognizes the new disk and reports array as degraded.
5. I select repair and wait...
6. After some hours, I see that the reconstruction process proceeds up to about 60% (last time I checked).
7. Next morning I check, reconstruction has stopped and array is still degraded.
8. I try to select repair but nothing happens.

So I decide to put the original 2TB disk back in tray 1, at least to restore the original array. I repeat the steps above, having now removed the 8TB disk and put the original, untouched 2TB disk back in tray 1. The NAS now reports the array as degraded; after reconstruction nothing happens, and I can't ask it to try again.

So now I can't even go back to where I was in the start! I have a 2TB degraded array and I don't even understand why.

Please help!"
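For context, the web interface's "repair" action corresponds roughly to re-adding the new disk's data partition to the md array. A dry-run sketch (the device and partition names here are assumptions for illustration, not taken from this NAS):

```shell
# Dry-run sketch of the md-level equivalent of the web UI "repair".
# /dev/md0 and /dev/sda2 are assumed names; DRY_RUN=1 makes the helper
# print each command instead of executing it.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mdadm /dev/md0 --add /dev/sda2   # add the replacement member; rebuild starts
run cat /proc/mdstat                 # rebuild progress shows up here
```

With DRY_RUN unset the same script would execute the commands, which of course requires root and the real devices.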

The additional step I took was to run a few commands; here is the output:

cat /proc/mdstat

Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda2[2] sdb2[1]
      1952996792 blocks super 1.2 [2/1] [_U]

unused devices: <none>
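In the mdstat output above, "[2/1]" means the array has two slots but only one active member, and the underscore in "[_U]" marks slot 0 as missing. That pattern can be tested for mechanically; a sketch against a saved snapshot (the live file is /proc/mdstat):

```shell
# Check a saved /proc/mdstat snapshot for a degraded array: an underscore
# inside the status brackets (e.g. "[_U]") marks a missing member.
cat > /tmp/mdstat.snapshot <<'EOF'
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda2[2] sdb2[1]
      1952996792 blocks super 1.2 [2/1] [_U]
unused devices: <none>
EOF

if grep -Eq '\[U*_[U_]*\]' /tmp/mdstat.snapshot; then
    echo "md0: degraded"
else
    echo "md0: healthy"
fi
```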

mdadm --examine /dev/md0

mdadm: No md superblock detected on /dev/md0.

mdadm --examine /dev/sd[ab]2

/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 4dccadb5:d1450afa:21a77618:e76638ad
           Name : NSA325-v2:0  (local to host NSA325-v2)
  Creation Time : Tue Sep 30 06:27:56 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 7813525504 (7451.56 GiB 8001.05 GB)
     Array Size : 1952996792 (1862.52 GiB 1999.87 GB)
  Used Dev Size : 1952996792 (1862.52 GiB 1999.87 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
Recovery Offset : 2721861632 sectors
          State : clean
    Device UUID : 41eabffd:6dbe66b3:b2bcbcfc:e88151f2

    Update Time : Thu Jan  7 10:15:33 2021
       Checksum : 4b25b258 - correct
         Events : 2013919

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing)

/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4dccadb5:d1450afa:21a77618:e76638ad
           Name : NSA325-v2:0  (local to host NSA325-v2)
  Creation Time : Tue Sep 30 06:27:56 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1952996928 (1862.52 GiB 1999.87 GB)
     Array Size : 1952996792 (1862.52 GiB 1999.87 GB)
  Used Dev Size : 1952996792 (1862.52 GiB 1999.87 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : af28ab52:2ec071ee:4d0b0550:6b745145

    Update Time : Thu Jan  7 10:15:33 2021
       Checksum : 2486c01e - correct
         Events : 2013919

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing)

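Two details stand out in the superblocks above. Both members report the same Events counter (2013919), which is why each of them believes the pair is healthy. And sda2 carries Feature Map 0x2 with a Recovery Offset, which appears to indicate an interrupted rebuild onto that disk. Comparing the counters from saved --examine output can be sketched like this:

```shell
# Compare the Events counters from saved `mdadm --examine` output
# (live use would be: mdadm --examine /dev/sda2 /dev/sdb2).
# Matching counters mean md considers the members mutually up to date.
cat > /tmp/examine.txt <<'EOF'
/dev/sda2:
         Events : 2013919
/dev/sdb2:
         Events : 2013919
EOF

distinct=$(awk '/Events/ { print $3 }' /tmp/examine.txt | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "event counters agree"
else
    echo "event counters differ: the stale member needs a rebuild"
fi
```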
fdisk -l

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sda: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      267350  2147483647+  ee  GPT

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x36035ebe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          64      514048+  83  Linux
/dev/sdb2              65      243201  1952997952+  20  Unknown

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Note: sector size is 4096 (not 512)

Disk /dev/sdc: 4000.7 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 60800 cylinders
Units = cylinders of 16065 * 4096 = 65802240 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       60801  3907018580   ee  GPT



All Replies

  • workinto
    workinto Posts: 44  Freshman Member
    edited January 2021
    Which disk models are you currently using, for the 2TB and the 8TB drives?
    If you made a backup beforehand, you can reset the device.
  • Mijzelf
    Mijzelf Posts: 2,639  Guru Member
    This data is generated at virtually the same moment? It's not consistent.

    /proc/mdstat shows the array as degraded, and AFAIK that is what the web interface uses for the status. But the members themselves think they are a healthy, happy array.

    md0 : active raid1 sda2[2] sdb2[1]
          1952996792 blocks super 1.2 [2/1] [_U]

       Device Role : Active device 0
       Array State : AA ('A' == active, '.' == missing)

       Device Role : Active device 1
       Array State : AA ('A' == active, '.' == missing)

    Did you already reboot after rebuilding the array?
    I made a mistake in my instructions:

    mdadm --examine /dev/md0

    should be

    mdadm --detail /dev/md0

    If a reboot doesn't work, can you give this info too?
  • doktor
    doktor Posts: 14  Freshman Member
    Hello again, and thanks for looking into this! While the forum was being migrated and was inaccessible, I decided after all to move to a Synology DS220j. So at the moment the NSA325v2 holds only one 2TB disk, since the two 8TB disks are now in the Synology. I am also planning to decommission the NSA325v2 today, so I gave it one last try with the command you suggested, although with only the one 2TB disk installed:

     mdadm --detail /dev/md0

    /dev/md0:
            Version : 1.2
      Creation Time : Tue Sep 30 06:27:56 2014
         Raid Level : raid1
         Array Size : 1952996792 (1862.52 GiB 1999.87 GB)
      Used Dev Size : 1952996792 (1862.52 GiB 1999.87 GB)
       Raid Devices : 2
      Total Devices : 1
        Persistence : Superblock is persistent

        Update Time : Thu Jan 21 09:06:30 2021
              State : clean, degraded
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
      Spare Devices : 0

               Name : NSA325-v2:0  (local to host NSA325-v2)
               UUID : 4dccadb5:d1450afa:21a77618:e76638ad
             Events : 2075073

        Number   Major   Minor   RaidDevice State
           0       0        0        0      removed
           1       8        2        1      active sync   /dev/sda2


    Before this I tried several reboots as you suggested, and I was able to build the 2x8TB array, but only from scratch; that is, not with the 2TB disks installed, but by removing them completely and installing the two blank 8TB disks. That did work, but it defeated the purpose of upgrading from an existing environment. Since I would have had to copy my files manually from the backup to the newly created 2x8TB array, it was just as easy to upgrade to a new NAS (OK, I had to spend some money, but the Zyxel was outdated anyway...).

    So I would like to thank everybody, and especially Mijzelf for all of his wonderful support!
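In the --detail table above, slot 0 reads "removed": the array still expects two members but only /dev/sda2 is attached. Reading the completeness of an array out of saved --detail output can be sketched as follows (the live command needs the real device):

```shell
# Summarize a saved `mdadm --detail` snapshot: compare Active Devices
# against Raid Devices to decide whether the array is complete.
cat > /tmp/detail.txt <<'EOF'
          State : clean, degraded
 Active Devices : 1
   Raid Devices : 2
EOF

active=$(awk -F: '/Active Devices/ { gsub(/ /, "", $2); print $2 }' /tmp/detail.txt)
total=$(awk -F: '/ Raid Devices/ { gsub(/ /, "", $2); print $2 }' /tmp/detail.txt)
echo "active $active of $total members"
```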

  • Mijzelf
    Mijzelf Posts: 2,639  Guru Member
    Could it be this one?
