HELP! NSA325v2 RAID1 degraded!
As an additional step, I executed some commands; here is the output:
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda2[2] sdb2[1]
1952996792 blocks super 1.2 [2/1] [_U]
unused devices: <none>
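A note on reading this output (my addition, not from the thread): "[2/1]" means the array has two slots but only one active member, and the "_" in "[_U]" marks the missing slot. Assuming a standard Linux md sysfs interface, the same state can be confirmed with:
cat /sys/block/md0/md/degraded       # prints 1 while the array is degraded
cat /sys/block/md0/md/array_state    # e.g. clean or active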
mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.
mdadm --examine /dev/sd[ab]2
/dev/sda2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x2
Array UUID : 4dccadb5:d1450afa:21a77618:e76638ad
Name : NSA325-v2:0 (local to host NSA325-v2)
Creation Time : Tue Sep 30 06:27:56 2014
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 7813525504 (7451.56 GiB 8001.05 GB)
Array Size : 1952996792 (1862.52 GiB 1999.87 GB)
Used Dev Size : 1952996792 (1862.52 GiB 1999.87 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Recovery Offset : 2721861632 sectors
State : clean
Device UUID : 41eabffd:6dbe66b3:b2bcbcfc:e88151f2
Update Time : Thu Jan 7 10:15:33 2021
Checksum : 4b25b258 - correct
Events : 2013919
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing)
/dev/sdb2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4dccadb5:d1450afa:21a77618:e76638ad
Name : NSA325-v2:0 (local to host NSA325-v2)
Creation Time : Tue Sep 30 06:27:56 2014
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1952996928 (1862.52 GiB 1999.87 GB)
Array Size : 1952996792 (1862.52 GiB 1999.87 GB)
Used Dev Size : 1952996792 (1862.52 GiB 1999.87 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : af28ab52:2ec071ee:4d0b0550:6b745145
Update Time : Thu Jan 7 10:15:33 2021
Checksum : 2486c01e - correct
Events : 2013919
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing)
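A side note on reading these superblocks (my gloss, not from the thread): both members report the same Events count (2013919) and "Array State : AA", so neither looks stale, yet sda2 carries Feature Map 0x2 and a Recovery Offset, which in md 1.2 metadata marks a device whose rebuild started but never completed. A quick way to compare the two members side by side:
mdadm --examine /dev/sda2 | grep -E 'Events|Array State|Feature Map'
mdadm --examine /dev/sdb2 | grep -E 'Events|Array State|Feature Map'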
fdisk -l
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee GPT
Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x36035ebe
Device Boot Start End Blocks Id System
/dev/sdb1 1 64 514048+ 83 Linux
/dev/sdb2 65 243201 1952997952+ 20 Unknown
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
Note: sector size is 4096 (not 512)
Disk /dev/sdc: 4000.7 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 60800 cylinders
Units = cylinders of 16065 * 4096 = 65802240 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdc1 1 60801 3907018580 ee GPT
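As the warnings above say, this fdisk build cannot read GPT labels, so the "ee GPT" rows are only protective MBR entries, not the real partitions. A sketch of the GNU Parted equivalent (assumed, not actually run in the thread) that would show the true layout of the GPT disks:
parted /dev/sda unit s print
parted /dev/sdc unit s print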
All Replies
-
Which disk models are you currently using for the 2TB and the 8TB drives?
If you made a backup beforehand, you can reset the device.
-
Was this data generated at virtually the same moment? It's not consistent.
/proc/mdstat shows the array as degraded, and AFAIK that is what the web interface uses for the status. But the members themselves think they are a healthy, happy array.
md0 : active raid1 sda2[2] sdb2[1]
1952996792 blocks super 1.2 [2/1] [_U]
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing)
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing)
Did you already reboot after rebuilding the array?
I made a mistake in my instructions:
mdadm --examine /dev/md0
should be
mdadm --detail /dev/md0
If a reboot doesn't work, can you give this info too?
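For context (my gloss, not part of the original reply): mdadm --examine reads the md superblock stored on a member partition, which is why it reported "No md superblock detected" when pointed at the assembled array device, while mdadm --detail queries the running array itself:
mdadm --examine /dev/sdb2   # reads the on-disk superblock of a member partition
mdadm --detail /dev/md0     # queries the assembled array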
-
Hello again, and thanks for looking into this! In the meantime, while the forum was being transferred and was inaccessible, I decided after all to migrate to a Synology DS220j. So in its current state the NSA325v2 holds only one 2TB disk, since the 2x8TB disks are now in the Synology. Also, I am planning to decommission the NSA325v2 today, so I gave it one last try with the command you suggested, although with only the one 2TB disk installed:
mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 30 06:27:56 2014
Raid Level : raid1
Array Size : 1952996792 (1862.52 GiB 1999.87 GB)
Used Dev Size : 1952996792 (1862.52 GiB 1999.87 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Thu Jan 21 09:06:30 2021
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : NSA325-v2:0 (local to host NSA325-v2)
UUID : 4dccadb5:d1450afa:21a77618:e76638ad
Events : 2075073
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 2 1 active sync /dev/sda2
Before this I tried several reboots, as you suggested, and I was able to build the 2x8TB array, but only from scratch; that is, not with the 2TB disks installed, but by removing them completely and installing the two blank 8TB disks. That worked indeed, but it defeated the purpose of upgrading from an existing environment. Since I would have had to copy my files manually from the backup to the newly created 2x8TB array, it was just as easy to move to a new NAS (OK, I had to spend some money, but the Zyxel was outdated anyway...)
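For reference, the in-place upgrade being attempted here would, on a generic Linux mdadm setup, look roughly like the sketch below; the device names are assumptions, and the NSA325v2 firmware may not expose these steps:
# retire one old 2TB member:
mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2
# physically swap in the new 8TB disk, recreate sdb2 on it, then rebuild:
mdadm /dev/md0 --add /dev/sdb2
# after repeating for the second disk, grow the array to the new capacity:
mdadm --grow /dev/md0 --size=max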