NAS540: Volume down, repairing failed, how to restore data?


All Replies

• Mijzelf  Posts: 2,613  Guru Member
Is there a possibility to copy specific folders (in video, music, photo) to an external HDD via the USB3 port using telnet/ssh commands (not the GUI)?
Sure. By default the USB3 disk is mounted on /e-data/<some-long-uuid>/. For convenience, you can create a symlink:
ln -s /e-data/<some-long-uuid> /usbdisk
(this will not survive a reboot)
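If you don't know the long uuid directory's name, you can simply list the mount directory first:
ls /e-data/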

    If the firmware mounted your array, it's mounted on /i-data/<some-hex-code>, and probably /i-data/sysvol is an (indirect) symlink to it. The shares are in the root of the array.
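A quick way to check where exactly the firmware mounted it is, for example:
mount | grep i-data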

    If you want to copy a subdirectory, the command can be
cd /i-data/<some-hex-code>/Video
cp -a MySubdir /usbdisk/
    If you want it verbose
    cp -av MySubdir /usbdisk/
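Before starting a big copy, it can't hurt to confirm the destination has enough room. For example:
du -sh MySubdir        # size of the source directory
df -h /usbdisk/        # free space on the USB disk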



• Florian  Posts: 7  Freshman Member
    Thanks again, Mijzelf!
    Let me know when you are in Berlin, then I will invite you for a beer :)
    Florian
• basetron  Posts: 13  Freshman Member
    Hi guys,

I'm facing a similar issue: my drive (also md2) went down and I replaced it with a new one. My /proc/mdstat shows:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid1 sdc3[0] sdd3[1]
      1949383488 blocks super 1.2 [2/2] [UU]

md2 : inactive sdb3[2](S)
      1949383680 blocks super 1.2

md1 : active raid1 sdb2[4] sdc2[5] sdd2[6]
      1998784 blocks super 1.2 [4/3] [U_UU]

md0 : active raid1 sdb1[4] sdc1[5] sdd1[6]
      1997760 blocks super 1.2 [4/3] [U_UU]

unused devices: <none>
My mdadm.conf contains:

ARRAY /dev/md0 level=raid1 num-devices=4 metadata=1.2 name=NAS540:0 UUID=60e20528:5ff04d12:9729d455:3fdd0b58
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1
ARRAY /dev/md1 level=raid1 num-devices=4 metadata=1.2 name=NAS540:1 UUID=069aed49:2bf44e5c:b42db67f:82d34ecc
   devices=/dev/sdb2,/dev/sdc2,/dev/sdd2
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=1.2 name=XXX:3 UUID=da1a17ea:f2a26d21:d35d0f62:4daa646d
   devices=/dev/sdc3,/dev/sdd3

and when scanning for md2, I get the following:

md device /dev/md2 does not appear to be active

Take a look at md3: it looks strange, as I guess all these names (md0, md1, md3) should have been identical.

Could you please write down, step by step, what I should do? All the shares from the faulty RAID array (the one which contains md2) are unavailable. I haven't tried anything on these drives so far, to preserve my files.

Below I'm pasting my web interface screens; perhaps it will make it easier to come up with a solution.

[screenshots of the web interface]
    Thanks in advance,

    BT


• Mijzelf  Posts: 2,613  Guru Member
    Can you post the output of
su

mdadm --examine /dev/sd[abcd]3

• basetron  Posts: 13  Freshman Member
    Hi Mijzelf,

    it looks as follows:

mdadm: cannot open /dev/sda3: No such device or address
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : d69d46c2:8e90e04e:a2188060:2c230b03
           Name : NAS540:2
  Creation Time : Fri Jul 17 21:09:32 2015
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
     Array Size : 1949383680 (1859.08 GiB 1996.17 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 731019264 sectors
          State : clean
    Device UUID : 8e602567:8efc09d7:f3a620a9:02f935e5

    Update Time : Sat Mar 16 09:39:47 2019
       Checksum : 409f1b - correct
         Events : 1470

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : da1a17ea:f2a26d21:d35d0f62:4daa646d
           Name : XXX:3  (local to host XXX)
  Creation Time : Mon May  9 17:11:30 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
     Array Size : 1949383488 (1859.08 GiB 1996.17 GB)
  Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ca63fccc:a5346ad6:6c4a09ed:be7f9a64

    Update Time : Wed May 29 23:59:27 2019
       Checksum : 8a5fa17b - correct
         Events : 2

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : da1a17ea:f2a26d21:d35d0f62:4daa646d
           Name : XXX:3  (local to host XXX)
  Creation Time : Mon May  9 17:11:30 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
     Array Size : 1949383488 (1859.08 GiB 1996.17 GB)
  Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 201d9c23:684df915:3de56760:e8bcfcfe

    Update Time : Wed May 29 23:59:27 2019
       Checksum : 2e4f4b8f - correct
         Events : 2

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing)
    Thanks,

    BT

• Mijzelf  Posts: 2,613  Guru Member
    I see no reason why md2 isn't assembled. Can you try it manually?
    mdadm --assemble /dev/md2 /dev/sdb3 --force
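If the assembly succeeds, md2 should show up as active again in:
cat /proc/mdstat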
BTW, you posted a lot more information than I saw at first sight, but this *** forum software sometimes removes line ends. So if you copy&paste data, have a look at the preview first, and add empty lines if it's borked up.
• basetron  Posts: 13  Freshman Member
    Hi  Mijzelf,

Initially, I was getting a reply that sdb3 is busy:

~ # mdadm --assemble /dev/md2 /dev/sdb3 --force
mdadm: /dev/sdb3 is busy - skipping

    so I stopped md2

~ # mdadm --stop /dev/md2
mdadm: stopped /dev/md2

but on assembling again, I got that there are not enough drives:
mdadm: /dev/md2 assembled from 0 drives and 1 rebuilding - not enough to start the array.

    There's something wrong.

As for the forum software: the preview I get looks very different from what finally gets published. I'll do my best to mind the line breaks.

• Mijzelf  Posts: 2,613  Guru Member
Right. Somehow the RAID manager thinks it's rebuilding the array, with only a single disk. (You can see that in the sdb3 header: the Recovery Offset line means a rebuild was in progress.)

    I don't know a way out, other than creating a new array around the existing filesystem. Stop /dev/md2, if necessary, and execute
mdadm --create --assume-clean --level=1 --raid-devices=2 --metadata=1.2 /dev/md2 missing /dev/sdb3
    This will build a 2 disk degraded raid1 array around the existing filesystem.
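If the create succeeds, a cautious way to verify the filesystem survived is to mount the new array read-only first and look around (the /mnt/check mountpoint is just an example name):
mkdir -p /mnt/check
mount -o ro /dev/md2 /mnt/check
ls /mnt/check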

If you don't feel comfortable with that, you can mount the internal filesystem manually using a loop device. According to the header, the data offset in the array is 262144 sectors. So if you create a loop device on /dev/sdb3 with that offset, it should be mountable.
losetup -o 134217728 /dev/loop1 /dev/sdb3

mkdir -p /mnt/mountpoint

mount /dev/loop1 /mnt/mountpoint
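(The -o offset for losetup is given in bytes: 262144 sectors x 512 bytes per sector = 134217728 bytes.) If you want to rule out any accidental writes, you can mount it read-only instead:
mount -o ro /dev/loop1 /mnt/mountpoint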

Now you can copy your precious files from /mnt/mountpoint to the other array, or to a USB disk.
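If rsync happens to be installed on the box (not a given, so treat this as an assumption; otherwise cp -a works fine), it has the advantage that an interrupted copy can be resumed:
rsync -av /mnt/mountpoint/MySubdir /usbdisk/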


• basetron  Posts: 13  Freshman Member
I guess that now I do have something to worry about: the NAS keeps beeping and all the shares have vanished.

I guess there's nothing left but to try to recover the files. I'll mount the drives in another PC running Linux and recover them from there :(

    Thank you, anyway.  
• Mijzelf  Posts: 2,613  Guru Member
Did you re-create the array? It can be normal that the NAS is beeping: you have a degraded array.

It's also normal that the shares on the new array have vanished. The shares are known by their mountpoint, and the mountpoint is dictated by the UUID of the array, which is new.

Have a look in the shares menu to see if you can simply re-enable them.
