How does the md array by the NAS 540/542 get mounted @ boot, and where is the md config file?
danielphoenix
Posts: 4, Freshman Member
All Replies
-
There is no config file.
On boot, init calls /etc/init.d/rcS, which calls /bin/storage_asm_mntfw_swap.sh, which uses 'mdadm --examine' on all internal partitions to find, assemble, and mount the RAID array(s).
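For illustration, a minimal sketch of what such an examine/assemble/mount step could look like (an assumption about the flow, not the actual contents of /bin/storage_asm_mntfw_swap.sh; the mount point is hypothetical):

    # Sketch only: scan internal partitions for md superblocks,
    # then assemble and mount whatever was found.
    for part in /dev/sd[a-d][1-4]; do
        mdadm --examine "$part" >/dev/null 2>&1 && echo "md member: $part"
    done
    mdadm --assemble --scan          # assemble all detected arrays
    mount /dev/md2 /i-data/sysvol    # hypothetical data-volume mount point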
-
Thanks for your answer.
Shouldn't "mdadm --examine" mount a manually created partition? Because it doesn't, unless after the manual creation the Python command storage_main.MainCreateMdVol_VG... is run; without this command it does not mount.
Of course I create the md0 and md1 arrays as well (none get mounted).
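Worth noting (general mdadm behaviour, not firmware-specific): '--examine' only reads a member's superblock; assembling and mounting are separate steps, which is consistent with nothing being mounted after an examine alone:

    mdadm --examine /dev/sda3                  # read-only: prints superblock info
    mdadm --assemble /dev/md2 /dev/sd[a-d]3    # this is what creates /dev/md2
    mount /dev/md2 /mnt                        # mounting is a third, separate step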
-
I must admit that I don't know. The script /bin/storage_asm_mntfw_swap.sh is hard to read; it seems to me it examines all partitions, but it also looks at sizes.
As far as I know there is no central database. I can't find the UUID of my RAID array anywhere in /etc/zyxel/, neither in binary nor in ASCII. /etc/zyxel/ is the mountpoint of a flash partition, and it's the only place where such a database could be stored. On disk is not possible, if the mounting depends on the database.
Another option would be a flag on the filesystem, whose absence would cause an unmount right after mounting. But I can't find that flag either.
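For reference, the search can be reproduced like this, assuming the data array is /dev/md2 (text match only; a binary hit would need the raw byte form of the UUID):

    # Hypothetical check: look for the array UUID under /etc/zyxel as text
    UUID=$(mdadm --detail /dev/md2 | awk '/UUID/ {print $3}')
    grep -r "$UUID" /etc/zyxel 2>/dev/null || echo "UUID not found as ASCII"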
-
My NAS contains 4×6TB drives in RAID5.
Setup had to be done manually, because the UI does "mkpart extended 4096MB 100%", which for RAID5 puts the volume over the 16TiB limit (so 100% was replaced with 5868292MB).
I'm trying to do a setup like this, so I won't have ~530GB of wasted space:
- md0 RAID1 2047MB sda1 sdb1 sdc1 sdd1 (FW)
- md1 RAID1 2048MB sda2 sdb2 sdc2 sdd2 (SWAP)
- md2 RAID5 17.6TB sda3 sdb3 sdc3 sdd3 (system, media, etc.)
- md3 RAID6 ~265GB sda4 sdb4 sdc4 sdd4 (cloud)
The whole setup works fine if done manually (roughly as sketched below), and all the services are started manually, but it does not remount on restart.
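The manual creation could look roughly like this (illustrative offsets derived from the sizes above, not verified commands; repeat the mkpart lines for each of sda..sdd):

    # Illustrative sketch of the manual setup, not verified on a NAS540/542
    parted /dev/sda mkpart primary 1MB 2048MB          # sda1, FW
    parted /dev/sda mkpart primary 2048MB 4096MB       # sda2, SWAP
    parted /dev/sda mkpart primary 4096MB 5868292MB    # sda3, data (keeps RAID5 under 16TiB)
    parted /dev/sda mkpart primary 5868292MB 100%      # sda4, cloud
    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[a-d]1
    mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[a-d]2
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[a-d]3
    mdadm --create /dev/md3 --level=6 --raid-devices=4 /dev/sd[a-d]4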
- storage_asm_mntfw_swap.sh ignores drives with more than 3 partitions (can be fixed)
- the NAS remembers the setup and the indexes even after the FW is restored to factory... (the index is probably taken from the array name on disk; did not check yet)
- if an array created with "storage_main.MainCreateMdVol_VG" is changed with mdadm, the changes revert after restart unless the function is run again
I'm trying to decompile storage_main.pyc to find out what it actually does (no success yet).
I will also try to zero out a disk and check it with a hex editor.
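For the decompiling attempt, one commonly used tool is uncompyle6 (assuming the firmware's .pyc was produced by a Python version it supports):

    # Run on a PC, from a directory holding a copy of the .pyc
    pip install uncompyle6
    uncompyle6 storage_main.pyc > storage_main.py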
-
Calling that Python script is enough to get the partition mounted and shared?
If you install RandomTools, all scripts in /i-data/sysvol/.PKG/RandomTools/etc/custom_startscripts/ will be executed on boot. There you can call your Python script. Would that be a solution for you?
-
The script needs to be called at array creation or modification, but it only works for a single-array setup (3 partitions per disk: FW + SWAP + UsrSys).
But maybe I can mount all arrays with a script at startup.
Thanks for the info.
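If that works out, a custom start script along these lines could bring up the extra array on boot (the md name and mount point are assumptions to adjust):

    #!/bin/sh
    # Hypothetical script for the custom_startscripts/ directory:
    # assemble any arrays the firmware skipped, then mount the extra one.
    mdadm --assemble --scan
    mount /dev/md3 /mnt/cloud    # placeholder mount point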