I have the NSA 325 V2 and placed raid1 on disk1 while disk2 did not yet have a volume


All Replies

  • Mijzelf
    Mijzelf Posts: 2,598  Guru Member
    The filesystem size (according to the superblock) is 244061232 blocks
    The physical size of the device is 244061198 blocks

    Ah, yes, of course. You copied the partition to a bigger disk, but the raid header still contains the old size, so the ext filesystem is still too big for the device.

    So the array has to be resized to match the disk size.

    mdadm --grow /dev/md3
    e2fsck.new /dev/md3
    mount /dev/md3 /tmp/mountpoint


  • Carlusha99
    Carlusha99 Posts: 39  Freshman Member
    Mijzelf:

    ~ # mdadm --grow /dev/md3
    mdadm: no changes to --grow
    ~ # e2fsck.new /dev/md3
    e2fsck 1.41.14 (22-Dec-2010)
    The filesystem size (according to the superblock) is 244061232 blocks
    The physical size of the device is 244061198 blocks
    Either the superblock or the partition table is likely to be corrupt!
    Abort<y>? no

    /dev/md3 contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/md3: 93128/61022208 files (6.8% non-contiguous), 141480464/244061232 blocks
    ~ #
    (The two signs before grow are a double dash: --)

     mount /dev/md3 /tmp/mountpoint
    mount: mount point /tmp/mountpoint does not exist
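
    The mount fails at this point only because the target directory does not exist yet. A minimal fix sketch (any empty directory would do as mountpoint; note the filesystem/device size mismatch above may still prevent a clean mount):

    mkdir -p /tmp/mountpoint
    mount /dev/md3 /tmp/mountpoint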





  • Mijzelf
    Mijzelf Posts: 2,598  Guru Member
    ~ # mdadm --grow /dev/md3
    mdadm: no changes to --grow

    mdadm --grow --size=max /dev/md3
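
    To verify the result afterwards, something like this should work (a sketch; --detail is a standard mdadm mode):

    mdadm --detail /dev/md3    # 'Array Size' should now match the partition size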

  • Carlusha99
    Carlusha99 Posts: 39  Freshman Member
    Mijzelf: I can see progress:

     # mdadm --grow --size=max /dev/md3
    mdadm: component size of /dev/md3 has been set to 976244928K
    ~ # e2fsck.new /dev/md3
    e2fsck 1.41.14 (22-Dec-2010)
    /dev/md3: clean, 93128/61022208 files, 141480464/244061232 blocks
    ~ # mount /dev/md3 /tmp/mountpoint
    mount: mount point /tmp/mountpoint does not exist
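
    As a quick check that the array really grew (a sketch; /proc/mdstat is available whenever the md driver is loaded):

    cat /proc/mdstat    # the md3 line should now report 976244928 blocks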



  • Carlusha99
    Carlusha99 Posts: 39  Freshman Member
    Then I tried:

    mount /dev/md3 /mnt/mountpoint
    ~ #
    I looked at the GUI and saw this:
    see attachment
  • Carlusha99
    Carlusha99 Posts: 39  Freshman Member
    The files on md3 remain inaccessible through the NAS, however, because the state of this md3 is unknown. I wonder what will happen if we do the same grow action on the original WD10EFRX disk, which has the raid1 and is still down?
  • Mijzelf
    Mijzelf Posts: 2,598  Guru Member
    Looks good! I suppose you can see your shares/subdirectories with
    ls -l /mnt/mountpoint/
    and the files in the share with
    ls -l /mnt/mountpoint/<sharename>/
    The NAS itself can't see the files because it didn't mount the filesystem itself. But now you can copy the files over with 'cp -a', as described earlier in this thread.
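
    For example (a sketch; <sharename> and the destination path are placeholders):

    cp -a /mnt/mountpoint/<sharename> /path/to/destination/    # -a preserves ownership, permissions and timestamps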

    I wonder what will happen if we do the same grow action on the original WD10EFRX disk, which has the raid1 and is still down?

    Good question. I tried to calculate it, but I get strange results.
    The output of mdadm --grow was
    mdadm: component size of /dev/md3 has been set to 976244928K
    which is strange, as it's a 2TB disk. (It is, isn't it?) So I would have expected a size of 2TB here instead of 1TB.
    But that 976244928K is exactly the size of sda2 minus 2048 sectors, where the start of the filesystem is, according to 'mdadm --examine'.
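
    A quick sanity check on that number (a sketch, using plain shell arithmetic):

    echo $((976244928 / 1024 / 1024))    # -> 931 GiB, i.e. roughly 1TB, not 2TB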

    In other words, just try it. It won't hurt. The commands are the same, only use md0 instead of md3.



  • Carlusha99
    Carlusha99 Posts: 39  Freshman Member
    Mijzelf:

    I max-sized md0:
    mdadm: component size of /dev/md0 has been set to 976244928K
    then did the e2fsck:
    e2fsck 1.41.14 (22-Dec-2010)
    /dev/md0: clean, 93128/61022208 files, 141480464/244061232 blocks

    I took out the external disk.

    I looked at the GUI: see attachment.
    (Remark: I changed the name of the volume on disk1 to WD10EFRX.)
    It seems the data isn't destroyed.
    But how do I make the files visible again? And the status of disk1 is now Degraded.








  • Mijzelf
    Mijzelf Posts: 2,598  Guru Member
    Answer ✓
    how do I make the files visible again

    In the shares menu you should be able to re-activate the shares on that volume.

    the status of disk1 is now Degraded

    That's just a name. It was a single-disk linear array; now it's a single-disk raid1 array. There is no difference in filesystem or redundancy, but because a raid1 array can have more redundancy, it is now called 'degraded'.
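
    If you want to see the same thing from the shell, a sketch (the exact output varies per mdadm build):

    mdadm --detail /dev/md0    # typically 'State : clean, degraded' with one of two raid devices present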



  • Carlusha99
    Carlusha99 Posts: 39  Freshman Member
    Answer ✓
    Mijzelf: Many thanks for your help over the past few weeks.
    The files are still in place... I just have to do something about the folder names.

    Now the question is how to proceed with the raid1 configuration.
    The 1TB WD10EFRX is now in raid1 and the 2TB is still in JBOD.
    However, the Migrate option is not available.

    I am now going to secure the files by copying them to the external disk, which I will reformat first. And continue from there.

    Regards

    Carl
