NAS540, FW V5.21(AATB.13), "manage" option greyed out, can't edit RAID settings
hello!
I had two disks, 6TB each, configured as RAID1. After years of service, one changed its S.M.A.R.T. status to "bad", so naturally I bought a spare, the exact same model, to replace it.
Because this NAS contains a years-long archive of work, and is also used on a daily basis, I was "too afraid" to just take out one disk, so I put the third one into the NAS and added it to the RAID. I guess that was probably a mistake ;)
Now I'd like to remove the "bad" disk (it still works, but I just want to get rid of the faulty one and free its bay so I have two empty bays again; I'm planning a second RAID, so I need those two spare bays).
I thought I'd just go to Storage Manager/Internal Storage/Volume, click on the volume, click the "Manage" option and degrade the RAID to use just two disks, but… the "Manage" option isn't available…
Is it by design, or did I do something wrong?
Can I, and if so, how do I get rid of the faulty disk and at the same time "shrink" the RAID to just two disks, without risking any data loss?
All Replies
I think it's by design. The web interface only offers the most basic functions, and degrading an array is not among them. The "Manage" option is only selectable when one of the implemented management actions can actually be performed, and there is nothing to do with a 2-disk RAID1. (Although you could convert it to a 3-disk RAID1, that is not supported either.)
If you only want to degrade the array you can simply pull a disk.
I already have a 3-disk RAID1. Like I mentioned: instead of just exchanging the faulty disk, I added a third one, and now it is part of the RAID1 config.
But if I understand you correctly, I can just take out one disk and leave two, right? Wouldn't my RAID1 then have status "degraded"?
Wouldn't my RAID1 then have status "degraded"?
Correct. The headers say the raid array should have 3 members, so when only 2 members are found the array is degraded. Of course this is only cosmetic; the redundancy is exactly the same as in a "genuine" 2-disk raid1 array.
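To make that concrete: /proc/mdstat reports the expected and found member counts in brackets, and a mismatch is what "degraded" means. A minimal illustration with sample text (an assumption, not real output from this NAS540):

```shell
# Sample /proc/mdstat entry for a 3-member raid1 with one member missing
# (illustrative text only, not captured from this box)
entry='md2 : active raid1 sdc3[2] sdb3[1]
      5851473920 blocks super 1.2 [3/2] [UU_]'

# [expected/found]: a mismatch between the two numbers means "degraded"
counts=$(printf '%s\n' "$entry" | grep -o '\[[0-9]*/[0-9]*\]')
echo "$counts"   # → [3/2]
```

Here [3/2] says the superblock expects 3 members but only 2 are active: exactly the cosmetic degraded state described above.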
I'm afraid the only way to shrink that array to a 2 disk one is using the mdadm tool on the command line.
Is that safe to play with for an "advanced beginner"? ;) I mean, with not-so-perfect knowledge of Linux, and using mdadm for the first time, I suppose I'm risking losing my whole RAID and saying goodbye to all the data?
I think I'll be using ChatGPT to help with the right syntax, but knowing it's not always right, I'll be walking through a minefield, right? ;)
I am inclined to trust command line tools rather than a GUI wrapper, especially when that GUI uses fantasy names. And it's not hard; you only have to read the documentation carefully. In most cases mdadm will only allow you to do stupid things if you add the --force command line switch.
In your case the biggest challenge is to find out which device node is used for your data raid array. Something ChatGPT can't tell you, I suppose, but 'cat /proc/mdstat' probably can. I think it's /dev/md2 in your case, but it can be something different depending on the history of that box.
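As a sketch of that lookup: the firmware and swap typically live in small md arrays and the data volume in one large array, so picking the entry with the largest block count is one way to find it. The mdstat text below is assumed sample data; on the real NAS you would pipe the actual file (`awk ... /proc/mdstat`) instead:

```shell
# Assumed sample /proc/mdstat contents (not captured from a real NAS540)
mdstat='md1 : active raid1 sda2[0] sdb2[1] sdc2[2]
      1998784 blocks super 1.2 [3/3] [UUU]
md2 : active raid1 sda3[2] sdb3[1] sdc3[0]
      5851473920 blocks super 1.2 [3/3] [UUU]'

# Pick the array with the largest block count: that is the data volume
dev=$(printf '%s\n' "$mdstat" | awk '
  $1 ~ /^md/                   { name = $1 }               # remember array name
  $2 == "blocks" && $1+0 > max { max = $1+0; dev = name }  # keep the biggest
  END                          { print "/dev/" dev }')
echo "$dev"   # → /dev/md2
```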
If you have removed the faulty disk, the command (run as root)

mdadm /dev/md2 --remove detached

should do the trick.
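For completeness, the whole shrink could be sketched as below. The device names are assumptions (verify the array node in /proc/mdstat and the failing member with 'mdadm --detail' first), and the 'run' wrapper only prints each command instead of executing it; replace it with direct execution as root once you are sure:

```shell
# Dry-run wrapper: prints each command instead of executing it
run() { echo "+ $*"; }

# Assumed names: /dev/md2 = data array, /dev/sda3 = partition on the bad disk
run mdadm /dev/md2 --fail /dev/sda3          # mark the bad member as failed
run mdadm /dev/md2 --remove /dev/sda3        # drop it from the array
run mdadm --grow /dev/md2 --raid-devices=2   # shrink the member count
```

The --grow step is what removes the cosmetic "degraded" status: after it, the superblock expects 2 members instead of 3.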