NAS540 RAID5, replacing drive, no repair option??
RomanSantiago
Posts: 1
I have three NAS540s, all with four 2TB drives. Each NAS is configured as one big RAID5 volume. I've done single-drive replacements before and I've always been prompted to repair the volume. As I understand it, with RAID5 I should be able to upgrade all the drives one by one, repairing the volume again each time I swap out a drive, right?
But when I tried swapping a drive just now, I got the "volume degraded" warning but no repair option.
Could this be because the new drive came out of another NAS540? In fact, it even came out of the same slot (#3) in the other NAS. Could that be confusing the firmware, as if the drive were somehow "marked" as slot 3? (One of my units had two drives that are less than 6 months old, so I was going to move them to the other NAS as part of this upgrade; they happen to match in brand and model.)
The Manage button on the Volume screen was grayed out, and in addition to the volume entry it also listed drive 3 separately as a "hot spare". Since the original, very old drive 3 hadn't actually failed, I reinstalled it; that did prompt me to repair, and the volume is now re-syncing while I try to figure out what to do.
I have very basic Linux experience (I'm a long-time Windows dev). I've enabled SSH, have PuTTY installed, and have verified I can log in. I've read some similar troubleshooting threads, but I don't know what I'm looking at, so I'm not sure what to do next.
#NAS_July_2020
Comments
Indeed, the NAS won't use a disk that appears to belong to another RAID array. You can wipe the partition table to solve that.
In Linux the four hard disks are called sda, sdb, sdc and sdd. By executing
cat /proc/mdstat
you can see which three disks are currently in use by the RAID array, so you know the name of the "new" one. Assuming that disk is sdc (which I would expect for the third slot, but I wouldn't count on it), become root and wipe the first 32 MiB:
su
dd if=/dev/zero of=/dev/sdc bs=1M count=32
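For anyone following along over SSH, here is a minimal sketch of that session. The device name sdc is an assumption; confirm it against your own /proc/mdstat (and /proc/partitions) output before running anything destructive, because dd against the wrong disk will destroy a healthy array member.
su                                          # the wipe needs root
cat /proc/mdstat                            # the three disks listed as array members are the old ones; the one not listed is the newcomer
cat /proc/partitions                        # confirms the new disk is detected and still carries partitions from the other NAS
dd if=/dev/zero of=/dev/sdc bs=1M count=32  # zero the first 32 MiB: partition table plus old RAID metadata
After that, check the Volume page in the web GUI again; with the foreign metadata gone, the disk should be treated as blank and the repair option should be offered.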
Dear Sir,
We found an article with the same topic and same contents as your post. If you have the same question, you can leave a comment and keep discussing on the original post. You don't have to copy all the contents and create a new thread for discussion.
Thanks.
Best regards,
Zyxel_Derrick