I have the NSA 325 V2 and placed RAID1 on disk1 while disk2 did not yet have a volume.
All Replies
The filesystem size (according to the superblock) is 244061232 blocks
The physical size of the device is 244061198 blocks

Ah, yes, of course. You copied the partition to a bigger disk, but the raid header still contains the old size, so the ext filesystem is still too big for the array. The array has to be resized to match the disk size:

mdadm --grow /dev/md3

e2fsck.new /dev/md3

mount /dev/md3 /tmp/mountpoint
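To confirm the mismatch before and after growing, you can compare the sizes directly (a sketch; it assumes dumpe2fs is present on the box):

# md device size in 1 KiB blocks
grep md3 /proc/partitions
# filesystem size according to the ext superblock (in filesystem blocks)
dumpe2fs -h /dev/md3 | grep 'Block count'
# raid array/component size according to the md header
mdadm --detail /dev/md3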
Mijzelf:

~ # mdadm --grow /dev/md3
mdadm: no changes to --grow
~ # e2fsck.new /dev/md3
e2fsck 1.41.14 (22-Dec-2010)
The filesystem size (according to the superblock) is 244061232 blocks
The physical size of the device is 244061198 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? no
/dev/md3 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md3: 93128/61022208 files (6.8% non-contiguous), 141480464/244061232 blocks
~ #

(The two signs before "grow" are "--".)

mount /dev/md3 /tmp/mountpoint
mount: mount point /tmp/mountpoint does not exist
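The mount fails only because the target directory does not exist yet; creating it first should be enough (a minimal sketch, using the same paths as above):

mkdir -p /tmp/mountpoint
mount /dev/md3 /tmp/mountpoint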
~ # mdadm --grow /dev/md3
mdadm: no changes to --grow

Try this instead:

mdadm --grow --size=max /dev/md3
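As a sanity check after the grow (a sketch, not from the original thread), the new size can be inspected with:

mdadm --detail /dev/md3    # the 'Array Size' line should now be at least the filesystem size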
Mijzelf: I can see progress:

# mdadm --grow --size=max /dev/md3
mdadm: component size of /dev/md3 has been set to 976244928K
~ # e2fsck.new /dev/md3
e2fsck 1.41.14 (22-Dec-2010)
/dev/md3: clean, 93128/61022208 files, 141480464/244061232 blocks
~ # mount /dev/md3 /tmp/mountpoint
mount: mount point /tmp/mountpoint does not exist
Then I tried:

mount /dev/md3 /mnt/mountpoint
~ #

I looked at the GUI and saw this:

see attachment
The files on md3 remain inaccessible through the NAS, however, because the state of this md3 is unknown. I wonder: if we do the same grow action with the original WD10EFRX disk, which has the RAID1 and is still down, what will happen then?
Looks good! I suppose you can see your shares/subdirectories with

ls -l /mnt/mountpoint/

and the files in a share with

ls -l /mnt/mountpoint/<sharename>/

The NAS itself can't see the files because it didn't mount the filesystem itself. But now you can copy the files over with 'cp -a', as described earlier in this thread.

"I wonder: if we do the same grow action with the original WD10EFRX disk, which has the RAID1 and is still down, what will happen then?"

Good question. I tried to calculate it, but I get strange results. The output of mdadm --grow was

mdadm: component size of /dev/md3 has been set to 976244928K

which is strange, as it's a 2TB disk. (It is, isn't it?) So I would have expected a size of 2TB here instead of 1TB. But that 976244928K is exactly the size of sda2 minus 2048 sectors, which is where the filesystem starts according to 'mdadm --examine'. In other words, just try it. It won't hurt. The commands are the same, only use md0 instead of md3.
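For reference, the arithmetic behind that remark, plus a sketch of the 'cp -a' copy (the destination /mnt/usbdisk is hypothetical; substitute wherever the external disk is actually mounted):

# 976244928 KiB * 2 = 1952489856 sectors (512-byte sectors)
# 1952489856 + 2048 = 1952491904 sectors, i.e. the full size of sda2,
#                     since the raid data starts 2048 sectors into the partition
cp -a /mnt/mountpoint/<sharename>/ /mnt/usbdisk/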
Mijzelf: I max-sized md0:

mdadm: component size of /dev/md0 has been set to 976244928K

and did the e2fsck:

e2fsck 1.41.14 (22-Dec-2010)
/dev/md0: clean, 93128/61022208 files, 141480464/244061232 blocks

I took out the external disk and looked at the GUI: see attachment. (Remark: I changed the name of the volume on disk1 to WD10EFRX.) It seems the data isn't destroyed. But how do I make the files visible again? And the status of disk1 is now Degraded.
"How do I make the files visible again?"

In the shares menu you should be able to re-activate the shares on that volume.

"The status of disk1 is now Degraded."

That's just a name. It was a single-disk linear array; now it's a single-disk raid1 array. There is no difference in filesystem or redundancy, but because a raid1 array can have more redundancy, we now call it 'degraded'.
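If you want to see that state from the shell as well (a sketch; the exact output wording varies per mdadm version):

cat /proc/mdstat
mdadm --detail /dev/md0    # check the 'Raid Level', 'Raid Devices' and 'State' lines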
Mijzelf: Many thanks for your help over the past few weeks. The files are still in place... I just have to do something with the folder names. Now the question is how to proceed with the RAID1 configuration. The 1TB WD10EFRX is now in RAID1 and the 2TB is still in JBOD, but the Migrate option is not available. For now I'm going to secure the files by copying them to the external disk, which I'm going to reformat, and continue from there. Regards, Carl