How does the md array created by the NAS 540/542 get mounted at boot, and where is the md config file?

danielphoenix Posts: 4  Freshman Member
edited February 2018 in Personal Cloud Storage

Best Answers

  • Mijzelf Posts: 2,002  Guru Member
    Answer ✓
    There is no config file. 
    On boot, init runs /etc/init.d/rcS, which calls /bin/storage_asm_mntfw_swap.sh; that script uses 'mdadm --examine' on all internal partitions to find, assemble and mount the raid array(s).
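
    In outline it does something like this (a rough sketch of the idea, not the literal script; device names and mountpoints here are examples):

        # find the partitions that carry raid metadata
        for part in /dev/sd[a-d][1-3]; do
            mdadm --examine "$part" >/dev/null 2>&1 && echo "$part is a raid member"
        done
        # assemble every array whose members were found, then activate/mount
        mdadm --assemble --scan
        swapon /dev/md1
        mount /dev/md2 /i-data/sysvol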
  • Mijzelf Posts: 2,002  Guru Member
    Answer ✓
    Calling that Python script is enough to get the partition mounted & shared?

    If you install RandomTools, all scripts in /i-data/sysvol/.PKG/RandomTools/etc/custom_startscripts/ will be executed on boot. There you can call your python script. Would that be a solution for you?
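
    For example, a file like this (the name and contents are just an illustration) is run on every boot:

        #!/bin/sh
        # /i-data/sysvol/.PKG/RandomTools/etc/custom_startscripts/10_custom.sh
        # everything in this directory is executed once at boot
        echo "custom start script ran" >> /tmp/custom_start.log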

All Replies

  • danielphoenix Posts: 4  Freshman Member
    edited February 2018
    Thanks for your answer.
    Shouldn't "mdadm --examine" mount a manually created partition? Because it doesn't, unless the python command storage_main.MainCreateMdVol_VG... is run after the manual creation; without that command nothing gets mounted.
    Of course I create the md0 and md1 arrays as well (none get mounted).
  • Mijzelf Posts: 2,002  Guru Member
    I must admit that I don't know. The script /bin/storage_asm_mntfw_swap.sh is hard to read; it seems to me it examines all partitions, but it also looks at sizes.

    As far as I know there is no central database. I can't find the UUID of my raid array anywhere in /etc/zyxel/, neither in binary nor in ASCII. /etc/zyxel/ is the mountpoint of a flash partition, and it's the only place where such a database could be stored. It can't live on the disks themselves, if the mounting depends on the database.
    Another option would be a flag on the filesystem which, if absent, would cause an unmount right after mounting. But I can't find such a flag either.
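
    Roughly what I tried (assuming md2 is the data array; the awk field may differ per mdadm version):

        # get the array UUID, then search the flash partition for it
        UUID=$(mdadm --detail /dev/md2 | awk '/UUID/ {print $3}')
        grep -r "$UUID" /etc/zyxel/ 2>/dev/null    # ascii; a binary match would need a hexdump pass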

  • danielphoenix Posts: 4  Freshman Member
    My NAS contains 4×6TB drives in RAID5.
    Setup had to be done manually, because the UI runs "mkpart extended 4096MB 100%", which for RAID5 over four 6TB disks puts the volume above the 16TiB limit; in my manual setup 100% is replaced with 5868292MB.

    I'm trying to do a setup like this, so I won't have ~530GB of wasted space:
     - md0 RAID1 2047MB sda1 sdb1 sdc1 sdd1 (FW)
     - md1 RAID1 2048MB sda2 sdb2 sdc2 sdd2 (SWAP)
     - md2 RAID5 17.6TB sda3 sdb3 sdc3 sdd3 (system,media,etc.)
     - md3 RAID6 ~265GB sda4 sdb4 sdc4 sdd4 (cloud)
    The whole setup works fine when everything is created and all the services are started by hand (roughly the steps sketched below), but it does not remount on restart.
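
    Roughly the manual steps (a sketch, not a recipe; I'm showing GPT, abbreviated offsets, and only one disk):

        # four partitions per disk instead of the stock three
        parted -s /dev/sda mklabel gpt
        parted -s /dev/sda mkpart primary 1MB 2048MB          # fw
        parted -s /dev/sda mkpart primary 2048MB 4096MB       # swap
        parted -s /dev/sda mkpart primary 4096MB 5868292MB    # system/media
        parted -s /dev/sda mkpart primary 5868292MB 100%      # cloud
        # ...same for sdb, sdc and sdd; md0/md1 are RAID1 over the first two, then:
        mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[a-d]3
        mdadm --create /dev/md3 --level=6 --raid-devices=4 /dev/sd[a-d]4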
    - storage_asm_mntfw_swap.sh ignores drives with more than 3 partitions (can be fixed)
    - the NAS remembers the setup and the indexes even after the FW is restored to factory defaults (the index is probably taken from the array name on disk; I did not check yet)
    - if an array created with "storage_main.MainCreateMdVol_VG" is changed with mdadm, the changes revert after a restart unless the function is run again

    I'm trying to decompile storage_main.pyc to find out what it actually does (no success yet).
    I will also try to zero out a disk and check it with a hex editor.
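
    Something like this (destructive, and /dev/sdX is a placeholder):

        dd if=/dev/zero of=/dev/sdX bs=1M    # wipe the whole disk
        # ...redo the setup on the NAS, then look at what came back;
        # hexdump -C collapses long runs of zeros into a '*'
        hexdump -C /dev/sdX | less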
  • danielphoenix Posts: 4  Freshman Member
    The script needs to be called at array creation or modification, but it only works for a single-array setup (3 partitions per disk): FW + SWAP + UsrSys.

    But maybe I can mount all the arrays with a script at startup, along the lines of the sketch below.
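
    Something like this in the RandomTools custom_startscripts directory (assuming the firmware handles md0-md2 once its script is fixed; device names match my layout, the mountpoint is just an example):

        #!/bin/sh
        # assemble and mount the extra RAID6 array the firmware skips
        mdadm --assemble /dev/md3 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
        mkdir -p /mnt/cloud
        mount /dev/md3 /mnt/cloud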

    Thanks for the info
