NAS 540 Volume gone after latest update.

All Replies

  • Mijzelf  Posts: 2,613  Guru Member
    I don't think the array is rebuilding, as the headers didn't change. But you can check that in /proc/mdstat:
    cat /proc/mdstat
    About the inability to log in, I don't think that is related; there is no login information stored on that volume. If the array is not rebuilding, you can first try to reboot the box:
    su
    reboot

  • kimme  Posts: 35  Freshman Member
    After the reboot I can log back in, so that's solved again.

    / $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md1 : active raid1 sda2[4] sdd2[3] sdb2[5] sdc2[6]
          1998784 blocks super 1.2 [4/4] [UUUU]

    md0 : active raid1 sda1[4] sdd1[3] sdb1[5] sdc1[6]
          1997760 blocks super 1.2 [4/4] [UUUU]

    unused devices: <none>


    Should I try to assemble again now?

  • Mijzelf  Posts: 2,613  Guru Member
    Do the logs contain any trace of why the array wasn't assembled? I wonder if disk sda has a hardware error which only shows up when accessing certain sectors. The arrays md0 and md1 both have a partition on sda as a member, without problems.
    You can view the kernel log using
    dmesg
    Maybe a filter is helpful
    dmesg | grep sda
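    If that shows nothing suspicious for sda, a broader filter can catch error messages from any disk. The exact wording of the kernel messages differs per driver, so take this as a rough sketch:
    dmesg | grep -iE 'error|fail'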

  • kimme  Posts: 35  Freshman Member

    $ dmesg | grep sda
    [   20.746660] sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
    [   20.754443] sd 0:0:0:0: [sda] 4096-byte physical blocks
    [   20.760218] sd 0:0:0:0: [sda] Write Protect is off
    [   20.765039] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
    [   20.770639] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [   20.847325]  sda: sda1 sda2 sda3
    [   20.858269] sd 0:0:0:0: [sda] Attached SCSI disk
    [   34.988141] md: bind<sda1>
    [   35.057240] md: bind<sda2>


  • Mijzelf  Posts: 2,613  Guru Member
    Answer ✓
    Nothing abnormal.

    Well, it won't hurt to assemble the array again, and if the firmware doesn't 'pick it up', you can also add sda3 manually:
    mdadm --manage /dev/md2 --add /dev/sda3
    The rebuilding will happen in background. You can see the status with
    cat /proc/mdstat
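    If you want the status to refresh by itself, something like this should also work, assuming the firmware's BusyBox includes the watch applet:
    watch -n 10 cat /proc/mdstat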

  • kimme  Posts: 35  Freshman Member
    2nd try assemble:

    ~ # mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
    mdadm: /dev/md2 has been started with 3 drives (out of 4).

    Manually:

    ~ # mdadm --manage /dev/md2 --add /dev/sda3
    mdadm: added /dev/sda3

    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid5 sda3[4] sdb3[1] sdd3[3] sdc3[2]
          5848151040 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
          [>....................]  recovery =  0.0% (410832/1949383680) finish=1027.8min speed=31602K/sec

    md1 : active raid1 sda2[4] sdd2[3] sdb2[5] sdc2[6]
          1998784 blocks super 1.2 [4/4] [UUUU]

    md0 : active raid1 sda1[4] sdd1[3] sdb1[5] sdc1[6]
          1997760 blocks super 1.2 [4/4] [UUUU]

    unused devices: <none>



    So I guess it's doing its recovery now. Let's hope this will be the solution.

    I assume 1 disk is dead then? As the exact disk isn't available anymore, would it hurt to put another disk in with the same specs?

  • Mijzelf  Posts: 2,613  Guru Member
    I assume 1 disk is dead then?

    Don't know. The log showed I/O errors, which is bad, but it could also be a bad contact. Have a look at the SMART data.
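    For example, assuming smartctl from the smartmontools package is available on the box (it may have to be installed separately), something like this shows the health verdict and the raw attributes:
    smartctl -H /dev/sda
    smartctl -A /dev/sda
    In the attribute table, Reallocated_Sector_Ct and Current_Pending_Sector are the ones to watch.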

    As the exact disk isn't available anymore, would it hurt to put another disk in with the same specs?
    Any disk of the same size or bigger will do; Linux software RAID isn't picky about the model. You might get slightly worse performance with a different disk due to timing differences, but in practice I think the disk caches will cover that.
  • kimme  Posts: 35  Freshman Member
    OK, thanks. I'll check tomorrow when the rebuild is finished, but as far as I know all disks showed a green, healthy status.
  • kimme  Posts: 35  Freshman Member
    Mijzelf, you're the hero of the day! All the data is back where it should be! You've saved me more than 10 years of pictures/memories.

    When I check the status of the disks I can't find any errors, and the array is healthy again.

    I don't know what caused it to be degraded, though.

    PS: While browsing this forum I also saw you have a repository with apps for these NASes. I'd like to run Plex Server on it.
    Do you think it will run/transcode fine on the NAS540, or shouldn't I bother trying to install it?
  • Mijzelf  Posts: 2,613  Guru Member
    You've saved me more than 10 years of pictures/memories.
    Make sure you have a backup. The box will fail again someday.
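    For example, a minimal sketch using rsync over SSH; the volume path and the backup host here are placeholders you would have to adapt:
    rsync -av /i-data/your-volume/photo/ user@backuphost:/backup/nas540/photo/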
    Do you think it will run/transcode fine on the NAS540
    It won't transcode, but AFAIK it will run fine. (I have no real experience; I created the package on request, and except for some basic testing I never used it. But the feedback was positive.)
