NAS 542 raid 5 volume down


All Replies

  • BjoWis
    BjoWis Posts: 33  Freshman Member
    A brief update on my progress: 37.76% completed.
     ipos:    1133 GB, non-trimmed:        0 B,  current rate:   2621 kB/s
         opos:    1133 GB, non-scraped:        0 B,  average rate:   1644 kB/s
    non-tried:    1867 GB,  bad-sector:        0 B,    error rate:       0 B/s
      rescued:    1133 GB,   bad areas:        0,        run time:  7d 23h 27m
    pct rescued:   37.76%, read errors:        0,  remaining time: 14d 18h 29m
                                  time since last successful read:          0s
    Copying non-tried blocks... Pass 1 (forwards)
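
    For context, this is ddrescue's periodic status display. A minimal sketch of the kind of invocation that produces it, with placeholder device names rather than the exact command used earlier in the thread:

    # /dev/sdX = failing source disk, /dev/sdY = new destination disk (placeholders)
    # rescue.map = mapfile/logfile that records progress and lets an interrupted run be resumed
    ddrescue -f /dev/sdX /dev/sdY rescue.map

    The -f is needed because the destination is a block device rather than a regular file.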


  • Mijzelf
    Mijzelf Posts: 2,625  Guru Member
    What's striking is that, although the speed is horrible (1.6 MB/s on average, while the disk should be able to do 160 MB/s), ddrescue still hasn't found a single truly unreadable sector.
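
    A quick way to check the raw-speed side of that comparison, assuming hdparm is available on the box running the rescue and the failing disk is /dev/sdX (placeholder name):

    hdparm -t /dev/sdX   # times buffered sequential reads for a few seconds

    That number can then be set against the average rate in ddrescue's status output.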
  • BjoWis
    BjoWis Posts: 33  Freshman Member
    Now at 84.2%, with a few read errors found:

         ipos:    2527 GB, non-trimmed:   585728 B,  current rate:   2097 kB/s
         opos:    2527 GB, non-scraped:        0 B,  average rate:   1723 kB/s
    non-tried:  473997 MB,  bad-sector:        0 B,    error rate:       0 B/s
      rescued:    2526 GB,   bad areas:        0,        run time: 16d 23h 19m
    pct rescued:   84.20%, read errors:       14,  remaining time:  3d  4h 32m
                                  time since last successful read:          0s
    Copying non-tried blocks... Pass 1 (forwards)
  • BjoWis
    BjoWis Posts: 33  Freshman Member
    @Mijzelf, what are your thoughts?

    Now it is scraping failed blocks... and that seems to be very time-consuming:
    100 kB took about 1 hour.

    What to do? Consider it done, or keep it going?
     
         ipos:    1502 GB, non-trimmed:        0 B,  current rate:     132 B/s
         opos:    1502 GB, non-scraped:   40887 kB,  average rate:   1598 kB/s
    non-tried:        0 B,  bad-sector:    1143 kB,    error rate:      16 B/s
      rescued:    3000 GB,   bad areas:     2233,        run time: 21d 17h 20m
    pct rescued:   99.99%, read errors:     3635,  remaining time:     16h 36m
                                  time since last successful read:          0s
    Trimming failed blocks... (forwards)
         ipos:    1500 GB, non-trimmed:        0 B,  current rate:       0 B/s
         opos:    1500 GB, non-scraped:   40787 kB,  average rate:   1595 kB/s
    non-tried:        0 B,  bad-sector:    1222 kB,    error rate:      18 B/s
      rescued:    3000 GB,   bad areas:     2235,        run time: 21d 18h 22m
    pct rescued:   99.99%, read errors:     3790,  remaining time:     59d 13m
                                  time since last successful read:      2m 55s
    Scraping failed blocks... (forwards)
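
    A rough estimate of how long the scraping phase would still take, using only the figures above (about 40887 kB non-scraped, at roughly 100 kB per hour as observed):

    echo $(( 40887 / 100 ))   # ~408 hours of scraping left at that rate
    echo $(( 408 / 24 ))      # ~17 days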



  • Mijzelf
    Mijzelf Posts: 2,625  Guru Member
    There are 40 MB of non-scraped sectors left. On a 2 TB filesystem that is 0.002%. According to e2fsck
    /dev/md2: 97078/274325504 files (3.5% non-contiguous), 1096506963/2194601472 blocks
    around 50% of the disk is not even used. So statistically you lose around 20 MB of data if you stop now. But there is no guarantee at all that that 20 MB can be recovered if you continue. In theory, if you save the logfile, you can continue where you stopped. That's the purpose of the logfile. But shutting down the old disk might change things a bit.
    If your bitcoin wallet worth a million Euros is on that disk, I'd continue now. In most other cases I'd save the logfile (in case a major blocking filesystem error turns up later) and call it a day.
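
    A minimal sketch of what "continue where you stopped" looks like in practice, with placeholder device names (the essential part is re-using the same mapfile; the real paths from earlier in the thread may differ):

    cp rescue.map rescue.map.bak               # keep a spare copy of the mapfile first
    ddrescue -f /dev/sdX /dev/sdY rescue.map   # same command, same mapfile: resumes instead of starting over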

  • BjoWis
    BjoWis Posts: 33  Freshman Member
    Haha, thanks for your input!

    I'll be away for a couple of days now, but what would be the next step?
    - Mount the new disk together with the other two working disks, and then reboot the NAS? Are there any specific commands etc. that are of interest at this stage?
  • Mijzelf
    Mijzelf Posts: 2,625  Guru Member
    "what would be the next step?"

    First put the 3 disks (the 2 old good ones, and the new copy) in the NAS and boot it. It won't hurt, and you never know.

    If the volume doesn't come up, you'll have to examine the raid headers, and rebuild the array correspondingly, as you did at the beginning of this thread.
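
    The information to collect in that case is the same as before, for example:

    cat /proc/mdstat                 # does the kernel assemble the array at all?
    mdadm --examine /dev/sd[abcd]3   # per-disk raid headers: UUIDs, event counts, device roles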

  • BjoWis
    BjoWis Posts: 33  Freshman Member
    I've inserted the 2 old good disks together with the new copy and rebooted the NAS. It starts beeping, and when I open the WUI it says the RAID is crashed/Full.



    And when examining the status of the array:
    ~ # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 3e046eeb:3bac7e38:2c4e2408:d05d7a1d

        Update Time : Mon Nov  7 09:55:23 2022
           Checksum : 602ee45 - correct
             Events : 37

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : f7c083fd:fe37f383:55424937:52ec4bd2

        Update Time : Mon Nov  7 09:55:23 2022
           Checksum : 4783b96b - correct
             Events : 37

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 555ccb7e:e9b29adc:2b39eea0:9329542f
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Wed Oct  5 13:17:25 2022
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405312 (8371.74 GiB 8989.09 GB)
      Used Dev Size : 5852270208 (2790.58 GiB 2996.36 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 803b4d20:57570076:2d7c62d1:e4bf567a

        Update Time : Mon Nov  7 09:55:23 2022
           Checksum : 9e7e60b7 - correct
             Events : 37

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AAA ('A' == active, '.' == missing)
    mdadm: cannot open /dev/sdd3: No such device or address

    and

    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid5 sdb3[1] sda3[3] sdc3[2]
          8778405312 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
          
    md1 : active raid1 sdb2[6] sda2[4] sdc2[2]
          1998784 blocks super 1.2 [4/3] [U_UU]
          
    md0 : active raid1 sdc1[5] sdb1[4] sda1[6]
          1997760 blocks super 1.2 [4/3] [UUU_]
          
    unused devices: <none>

    Would the same three commands work now as well?
    su
    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5  --raid-devices=4 --metadata=1.2 --chunk=64K  --layout=left-symmetric /dev/md2 missing /dev/sdb3 /dev/sdd3 /dev/sdc3 && e2fsck /dev/md2




  • Mijzelf
    Mijzelf Posts: 2,625  Guru Member
    According to /proc/mdstat the array is assembled and up. So recreating the array shouldn't do much. What happens if you just try to mount it?
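    A slightly more cautious variant of that test is to mount read-only first (the mountpoint name is arbitrary):

    mkdir -p /tmp/mountpoint
    mount -o ro /dev/md2 /tmp/mountpoint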
  • BjoWis
    BjoWis Posts: 33  Freshman Member
    Hmm... are we back at square 1?

    Tried to mount:
    ~ # mkdir -p /tmp/mountpoint
    ~ # mount /dev/md2 /tmp/mountpoint
    mount: wrong fs type, bad option, bad superblock on /dev/md2,
           missing codepage or helper program, or other error

           In some cases useful info is found in syslog - try
           dmesg | tail or so.


    I tried that command and looked at the last row:

    [  533.041160] EXT4-fs (md2): bad geometry: block count 2194601472 exceeds size of device (2194601328 blocks)
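
    The mismatch in that message is small; taking the two block counts from the dmesg line and the 4k block size reported by resize2fs below:

    echo $(( 2194601472 - 2194601328 ))         # 144 blocks
    echo $(( (2194601472 - 2194601328) * 4 ))   # 576 KiB by which the filesystem overhangs the device

    which is why shrinking the filesystem to the device size with resize2fs was the suggested next step.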

    Tried to resize (according to your previous comment):
    ~ # resize2fs /dev/md2
    resize2fs 1.42.12 (29-Aug-2014)
    The filesystem can be resize to 2194601328 blocks.chk_expansible=0

    Resizing the filesystem on /dev/md2 to 2194601328 (4k) blocks.
    resize2fs: Can't read a block bitmap while trying to resize /dev/md2
    Please run 'e2fsck -fy /dev/md2' to fix the filesystem
    after the aborted resize operation.

    and tried to run that command:
    ~ # e2fsck -fy /dev/md2
    e2fsck 1.42.12 (29-Aug-2014)
    e2fsck: Attempt to read block from filesystem resulted in short read while trying to open /dev/md2
    Could this be a zero-length partition?
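
    A sanity check that fits here (not something run in the thread) is to compare the md device's actual size with what the ext4 superblock expects:

    blockdev --getsize64 /dev/md2                           # device size in bytes
    dumpe2fs -h /dev/md2 | grep -iE 'block (count|size)'    # block count/size recorded in the superblock

    If the device really is smaller than the recorded block count, that would be consistent with both the bad-geometry message and the short read from e2fsck.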


