NAS540 Volume down ...need help

tron7766
tron7766 Posts: 8  Freshman Member
edited June 2019 in Personal Cloud Storage
hi to all,
I need help. I have a NAS540 with a RAID5 volume on 4x3 TB drives. After one HDD started throwing errors, the NAS degraded the RAID5 volume. But then I pulled the wrong HDD, put it back, and replaced the faulty HDD with a new one.
When I restarted the NAS, it showed that the volume is down.
So now I have no access to my data. What are the next steps?

Here is a screenshot with the mdstat
 cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : inactive sdd3[4](S) sda3[5](S)
      5852270592 blocks super 1.2

md1 : active raid1 sdd2[6] sda2[7] sdc2[4]
      1998784 blocks super 1.2 [4/3] [UU_U]

md0 : active raid1 sdd1[6] sda1[5] sdc1[4]
      1997760 blocks super 1.2 [4/3] [UUU_]


#NAS_Jun_2019

Comments

  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    But then I pulled the wrong HDD, put it back, and replaced the faulty HDD with a new one.

    If you powered it up with only 2 disks, the degraded status is stored in the array headers. That will not automagically be repaired. AFAIK the only way to get the array running again is to re-create it.

    Can you post the output of

    su
    mdadm --examine /dev/sd[abcd]3
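
    For a quick overview of just the fields that matter here (Device Role, Array State, Events, Data Offset), something like the following one-liner can be used; this is only a sketch and assumes the data partitions are the third ones (sdX3), as your mdstat suggests:

    mdadm --examine /dev/sd[abcd]3 | grep -E '/dev/|Device Role|Array State|Events|Data Offset'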


  • tron7766
    tron7766 Posts: 8  Freshman Member
    This one...

    mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 07fd666b:a8e9b67a:b4149a5d:eeb3255e
               Name : nas540:2
      Creation Time : Tue Oct  6 18:56:14 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 270b9ca6:2c9e3e71:534c23e2:7a7b3a5b

        Update Time : Sun Jun  9 22:10:51 2019
           Checksum : 8ac939ee - correct
             Events : 19157

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : AA.A ('A' == active, '.' == missing)
    mdadm: cannot open /dev/sdb3: No such device or address
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 07fd666b:a8e9b67a:b4149a5d:eeb3255e
               Name : nas540:2
      Creation Time : Tue Oct  6 18:56:14 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
        Data Offset : 196608 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 7a1f2349:3361d24d:807f7d6a:90491796

        Update Time : Sun Jun  9 22:12:53 2019
           Checksum : cd151306 - correct
             Events : 19157

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : AA.. ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 07fd666b:a8e9b67a:b4149a5d:eeb3255e
               Name : nas540:2
      Creation Time : Tue Oct  6 18:56:14 2015
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778405888 (8371.74 GiB 8989.09 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 612e6ba5:3be4635c:18c0cdf2:f1004f3a

        Update Time : Sun Jun  9 22:12:53 2019
           Checksum : 647c9cec - correct
             Events : 19157

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 0
       Array State : AA.A ('A' == active, '.' == missing)

  • tron7766
    tron7766 Posts: 8  Freshman Member
    What happened to sdb2 and sdb3?

    cat /proc/partitions
    major minor  #blocks  name

       7        0     147456 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 2930266584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 2926266368 sda3
       8       16 2930266584 sdb
       8       17 2930265088 sdb1
       8       32 2930233816 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35 2926233600 sdc3
       8       48 2930266584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51 2926266368 sdd3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
  • tron7766
    tron7766 Posts: 8  Freshman Member
    And here is the output when I run the following ...

    mdadm --examine /dev/sd[abcd]1

    mdadm: No md superblock detected on /dev/sdb1.


    /dev/sda1:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4475e358:25154506:7edcedde:c56c1f56
               Name : nas540:0
      Creation Time : Tue Oct  6 18:56:12 2015
         Raid Level : raid1
       Raid Devices : 4

     Avail Dev Size : 3995648 (1951.33 MiB 2045.77 MB)
         Array Size : 1997760 (1951.27 MiB 2045.71 MB)
      Used Dev Size : 3995520 (1951.27 MiB 2045.71 MB)
        Data Offset : 2048 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 6401fac5:89f8495d:d231f77b:797c06c9

        Update Time : Mon Jun 10 09:18:07 2019
           Checksum : be9e7515 - correct
             Events : 828


       Device Role : Active device 2
       Array State : AAA. ('A' == active, '.' == missing)
    mdadm: No md superblock detected on /dev/sdb1.
    /dev/sdc1:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4475e358:25154506:7edcedde:c56c1f56
               Name : nas540:0
      Creation Time : Tue Oct  6 18:56:12 2015
         Raid Level : raid1
       Raid Devices : 4

     Avail Dev Size : 3995648 (1951.33 MiB 2045.77 MB)
         Array Size : 1997760 (1951.27 MiB 2045.71 MB)
      Used Dev Size : 3995520 (1951.27 MiB 2045.71 MB)
        Data Offset : 2048 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : d440aebc:81d1648b:f065fdad:e32fbe1a

        Update Time : Mon Jun 10 09:18:07 2019
           Checksum : 672b7504 - correct
             Events : 828


       Device Role : Active device 1
       Array State : AAA. ('A' == active, '.' == missing)
    /dev/sdd1:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4475e358:25154506:7edcedde:c56c1f56
               Name : nas540:0
      Creation Time : Tue Oct  6 18:56:12 2015
         Raid Level : raid1
       Raid Devices : 4

     Avail Dev Size : 3995648 (1951.33 MiB 2045.77 MB)
         Array Size : 1997760 (1951.27 MiB 2045.71 MB)
      Used Dev Size : 3995520 (1951.27 MiB 2045.71 MB)
        Data Offset : 2048 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 28d0324e:0dbc342d:02eb5d75:968bda21

        Update Time : Mon Jun 10 09:18:07 2019
           Checksum : 68fccfaa - correct
             Events : 828


       Device Role : Active device 0
       Array State : AAA. ('A' == active, '.' == missing)
    ~ # mdadm: No md superblock detected on /dev/sdb1.



        

  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    I don't know how you managed to get that. sda3 and sdd3 agree that they are part of a degraded array (Array State : AA.A ('A' == active, '.' == missing)), but sdc3 thinks the array is down (Array State : AA..). I can't think of a scenario that leads to that.

    Anyway, seeing the roles of the different partitions, the command to re-create the array is

    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sdc3 missing /dev/sda3
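
    After a --create --assume-clean it is worth checking the result read-only before anything writes to it. A minimal sketch (both commands are non-destructive as shown; e2fsck -n only reports and changes nothing):

    mdadm --detail /dev/md2    # confirm level, chunk size and device order look right
    e2fsck -n /dev/md2         # read-only filesystem check, answers 'no' to all fixes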

    What happened to sdb2 and sdb3?

    sdb is your new disk, and it appears to have a single partition spanning the whole disk. That will be changed if you add it to the array.
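
    For reference, one way to give the new disk the same partition layout as a healthy member before adding it is to clone the GPT; this is only a sketch and assumes sgdisk is available on the box, which the stock firmware may not provide:

    sgdisk --backup=/tmp/sda.gpt /dev/sda        # save the partition table of a good disk
    sgdisk --load-backup=/tmp/sda.gpt /dev/sdb   # write the same layout to the new disk
    sgdisk -G /dev/sdb                           # randomize GUIDs so the two disks stay distinct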

  • tron7766
    tron7766 Posts: 8  Freshman Member
    Hi, found some time for the NAS....
    Good news: md2 is active.
    Bad news: the volume is still down in the GUI and the beeper is on.
    cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid5 sda3[0] sdd3[3] sdc3[2]
          8778307008 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
    
    md1 : active raid1 sdd2[6] sda2[7] sdc2[4]
          1998784 blocks super 1.2 [4/3] [UU_U]
    
    md0 : active raid1 sdd1[6] sda1[5] sdc1[4]
          1997760 blocks super 1.2 [4/3] [UUU_]
    
    The array doesn't want to rebuild...


    mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 0d8a619a:97ce0384:ee3a5475:2aa65d1c
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Wed Jun 12 15:37:28 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778307008 (8371.65 GiB 8988.99 GB)
      Used Dev Size : 5852204672 (2790.55 GiB 2996.33 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : d74878a0:25ea4764:cb75619f:72746b15

        Update Time : Wed Jun 12 16:41:22 2019
           Checksum : edca6223 - correct
             Events : 6

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 0
       Array State : A.AA ('A' == active, '.' == missing)
    mdadm: cannot open /dev/sdb3: No such device or address
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 0d8a619a:97ce0384:ee3a5475:2aa65d1c
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Wed Jun 12 15:37:28 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852205056 (2790.55 GiB 2996.33 GB)
         Array Size : 8778307008 (8371.65 GiB 8988.99 GB)
      Used Dev Size : 5852204672 (2790.55 GiB 2996.33 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 1f2d5346:85f27cb9:cbe98656:69fb864d

        Update Time : Wed Jun 12 16:41:22 2019
           Checksum : d81a49c4 - correct
             Events : 6

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : A.AA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 0d8a619a:97ce0384:ee3a5475:2aa65d1c
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Wed Jun 12 15:37:28 2019
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 5852270592 (2790.58 GiB 2996.36 GB)
         Array Size : 8778307008 (8371.65 GiB 8988.99 GB)
      Used Dev Size : 5852204672 (2790.55 GiB 2996.33 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : b24fd27c:124bef01:014c7c14:d3878b60

        Update Time : Wed Jun 12 16:41:22 2019
           Checksum : 2806b385 - correct
             Events : 6

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : A.AA ('A' == active, '.' == missing)

     mdadm --deatail /dev/md2
    mdadm: unrecognized option '--deatail'
    Usage: mdadm --help
      for help
    ~ # mdadm --detail /dev/md2
    /dev/md2:
            Version : 1.2
      Creation Time : Wed Jun 12 15:37:28 2019
         Raid Level : raid5
         Array Size : 8778307008 (8371.65 GiB 8988.99 GB)
      Used Dev Size : 2926102336 (2790.55 GiB 2996.33 GB)
       Raid Devices : 4
      Total Devices : 3
        Persistence : Superblock is persistent

        Update Time : Wed Jun 12 16:41:22 2019
              State : clean, degraded
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0

             Layout : left-symmetric
         Chunk Size : 64K

               Name : NAS540:2  (local to host NAS540)
               UUID : 0d8a619a:97ce0384:ee3a5475:2aa65d1c
             Events : 6

        Number   Major   Minor   RaidDevice State
           0       8        3        0      active sync   /dev/sda3
           1       0        0        1      removed
           2       8       35        2      active sync   /dev/sdc3
           3       8       51        3      active sync   /dev/sdd3


  • tron7766
    tron7766 Posts: 8  Freshman Member
    Then I continued...
     mdadm --stop /dev/md2
    mdadm: stopped /dev/md2

    mdadm --assemble --force /dev/md2 /dev/sd[abcd]3
    mdadm: cannot open device /dev/sdb3: No such device or address
    mdadm: /dev/sdb3 has no superblock - assembly aborted

    I don't know what is going on with sdb3...
     sha2_512 sha256_hmac null
    [   31.108529] egiga0: no IPv6 routers present
    [   39.747678] ADDRCONF(NETDEV_CHANGE): egiga1: link becomes ready
    [   42.358836] md: md2 stopped.
    [   42.410376] md: bind<sdc3>
    [   42.413416] md: bind<sdd3>
    [   42.416516] md: bind<sda3>
    [   42.423346] md/raid:md2: device sda3 operational as raid disk 0
    [   42.429309] md/raid:md2: device sdd3 operational as raid disk 3
    [   42.435245] md/raid:md2: device sdc3 operational as raid disk 2
    [   42.442242] md/raid:md2: allocated 4220kB
    [   42.446350] md/raid:md2: raid level 5 active with 3 out of 4 devices, algorithm 2
    [   42.453872] RAID conf printout:
    [   42.453878]  --- level:5 rd:4 wd:3
    [   42.453885]  disk 0, o:1, dev:sda3
    [   42.453891]  disk 2, o:1, dev:sdc3
    [   42.453897]  disk 3, o:1, dev:sdd3
    [   42.453999] md2: detected capacity change from 0 to 8988986376192
    [   42.681557]  md2: unknown partition table
    [   43.229025] EXT4-fs (md2): Couldn't mount because of unsupported optional features (4000000)
    [   70.551156] bz time = 1f
    [   70.553704] bz status = 3
    [   70.556329] bz_timer_status = 0
    [   70.559513] start buzzer
    [   73.663087] bz time = 1
    [   73.666651] bz status = 1
    [   73.670355] bz_timer_status = 1
    [   74.804812] bz time = 0
    [   74.807271] bz status = 0
    [   74.809914] bz_timer_status = 1
    [  161.742082] md2: detected capacity change from 8988986376192 to 0
    [  161.748204] md: md2 stopped.
    [  161.751121] md: unbind<sda3>
    [  161.788549] md: export_rdev(sda3)
    [  161.791898] md: unbind<sdd3>
    [  161.828544] md: export_rdev(sdd3)
    [  161.831892] md: unbind<sdc3>
    [  161.868579] md: export_rdev(sdc3)
    [  314.814732] md: bind<sda3>
    [  314.817667] md: bind<sdc3>
    [  314.820618] md: bind<sdd3>
    [  314.827481] md/raid:md2: device sdd3 operational as raid disk 3
    [  314.833468] md/raid:md2: device sdc3 operational as raid disk 2
    [  314.839417] md/raid:md2: device sda3 operational as raid disk 0
    [  314.846385] md/raid:md2: allocated 4220kB
    [  314.850496] md/raid:md2: raid level 5 active with 3 out of 4 devices, algorithm 2
    [  314.857998] RAID conf printout:
    [  314.858003]  --- level:5 rd:4 wd:3
    [  314.858010]  disk 0, o:1, dev:sda3
    [  314.858017]  disk 2, o:1, dev:sdc3
    [  314.858023]  disk 3, o:1, dev:sdd3
    [  314.858119] md2: detected capacity change from 0 to 8988986376192
    [  314.865839]  md2: unknown partition table
    [  975.865301] md2: detected capacity change from 8988986376192 to 0
    [  975.871490] md: md2 stopped.
    [  975.874401] md: unbind<sdd3>
    [  975.908638] md: export_rdev(sdd3)
    [  975.911994] md: unbind<sdc3>
    [  975.968580] md: export_rdev(sdc3)
    [  975.971938] md: unbind<sda3>
    [  975.988582] md: export_rdev(sda3)
    ~ #
    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md1 : active raid1 sdd2[6] sda2[7] sdc2[4]
          1998784 blocks super 1.2 [4/3] [UU_U]

    md0 : active raid1 sdd1[6] sda1[5] sdc1[4]
          1997760 blocks super 1.2 [4/3] [UUU_]

    unused devices: <none>

  • Mijzelf
    Mijzelf Posts: 2,613  Guru Member
    June 10
    /dev/sda1:
    <snip>
       Device Role : Active device 2
    /dev/sdc1:
       Device Role : Active device 1
    /dev/sdd1:
       Device Role : Active device 0

    June 12:
    /dev/sda3:
       Device Role : Active device 0
    /dev/sdc3:
       Device Role : Active device 2
    /dev/sdd3:
       Device Role : Active device 3

    You didn't use the create command exactly as I specified, or you re-shuffled the disks after creation. Based on your dump of June 10, I specified '/dev/sdd3 /dev/sdc3 missing /dev/sda3', but instead it seems '/dev/sda3 missing /dev/sdc3 /dev/sdd3' was used.

    BTW, now I see I made a mistake; it should be '/dev/sdd3 /dev/sdc3 /dev/sda3 missing'.
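
    Spelled out with the same parameters as before, only with the corrected device order, the re-create would look like this (again a sketch, to be double-checked against the examine output first):

    mdadm --stop /dev/md2
    mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sdc3 /dev/sda3 missing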

  • tron7766
    tron7766 Posts: 8  Freshman Member
    News from the NAS. With
    mdadm --create....
    the NAS didn't start a recovery. The partitions sdb1..3 were lost.
    I copied the partition table from sda to sdb with dd, then manually added sdb1..3 to md0..md2, as sketched below.
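
    The manual add was something like this (from memory; the usual mdadm --add form, so the exact invocation may have differed slightly):

    mdadm --add /dev/md0 /dev/sdb1
    mdadm --add /dev/md1 /dev/sdb2
    mdadm --add /dev/md2 /dev/sdb3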
    The NAS then began to recover, and the GUI shows it too. The info is still that the volume is down, and I can't see any data.
    dmesg shows...

    [   41.983404] md: recovery of RAID array md2
    [   41.987553] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
    [   41.993403] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
    [   42.003046] md: using 128k window, over a total of 2926102336k.
    [   42.009024] md: resuming recovery of md2 from checkpoint.
    [   42.433866]  md2: unknown partition table
    [   42.953680] EXT4-fs (md2): bad geometry: block count 2194601472 exceeds size of device (2194576752 blocks)
    [   95.978898] bz time = 1
    [   95.981358] bz status = 1
    [   95.983985] bz_timer_status = 0
    [   95.987193] start buzzer
    [  165.300990] bz time = 0
    [  165.303450] bz status = 0
    [  165.306077] bz_timer_status = 1
    [  580.050138] UBI error: ubi_open_volume: cannot open device 5, volume 0, error -16
    [  580.058887] UBI error: ubi_open_volume: cannot open device 5, volume 0, error -16
    [  580.095206] UBI error: ubi_open_volume: cannot open device 3, volume 0, error -16
    [  580.121457] UBI error: ubi_open_volume: cannot open device 3, volume 0, error -16
    [  605.555375] UBI error: ubi_open_volume: cannot open device 5, volume 0, error -16
    [  605.563095] UBI error: ubi_open_volume: cannot open device 5, volume 0, error -16
    [  605.572758] UBI error: ubi_open_volume: cannot open device 3, volume 0, error -16
    [  605.580340] UBI error: ubi_open_volume: cannot open device 3, volume 0, error -16

    fdisk says the backup partition table on sdb is bad and that it is working with the primary table...

    Any chance to get the data back?


  • tron7766
    tron7766 Posts: 8  Freshman Member
    md2 bad geometry....
    With e2fsck -f /dev/XXX and then resize2fs /dev/XXX the geometry error is gone..
    But the data?
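
    For context, that sequence is normally run against the array device itself; a minimal sketch, assuming the ext4 filesystem sits directly on /dev/md2 as the earlier dmesg output suggests:

    e2fsck -f /dev/md2     # forced filesystem check; resize2fs requires this first
    resize2fs /dev/md2     # with no size given, resizes the filesystem to match the device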
