NAS 540 Volume gone after latest update.
I just updated my NAS540 to the latest firmware.
Is there any way to restore this? Otherwise I'm losing 5TB of data!
I hope there's a solution for this!
Kim
#NAS_Aug
Accepted Solution
-
Nothing abnormal. Well, it won't hurt to assemble the array again, and if the firmware doesn't 'pick it up', you can also add sda3 manually:
mdadm --manage /dev/md2 --add /dev/sda3
The rebuilding will happen in the background. You can see the status with:
cat /proc/mdstat
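For completeness, a minimal sketch of the whole sequence as run over SSH (assuming the array has already been assembled as /dev/md2, as in the assemble step further down the thread; the recovery line is only an illustration of what mdstat typically prints while rebuilding):
su                                        # elevate from 'admin' to 'root' first
mdadm --manage /dev/md2 --add /dev/sda3   # re-add the dropped member
cat /proc/mdstat                          # progress shows up as something like:
                                          #   [>....] recovery = 1.2% (...)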
All Replies
-
What kind of volume is that? (Single disk, RAID1, RAID5)
Do you reboot the box regularly, or was this the first reboot in a long time?
Can you enable the SSH server (Control Panel->Network->Terminal), log in (you can use PuTTY for that), and post the output of:
cat /proc/partitions
cat /proc/mdstat
-
Hi,
Thanks for the reply!
It's a RAID5 setup with 4 disks.
I've never used SSH, so I might need some help here. (Using a Mac)
-
macOS has the Terminal tool built in, so you can open it by searching for "Terminal".
Then type "ssh nas_ip" or "telnet nas_ip" to access your NAS540.
The login info is the same as your admin/password.
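In case the bare "ssh nas_ip" form prompts for an unexpected user, a hedged example of the usual invocation (the IP address here is only a placeholder for your NAS540's address; the admin account name follows from the login info above):
ssh admin@192.168.1.100   # replace 192.168.1.100 with the NAS540's IP
-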
I don't know if this is what you need, but here we go:
/ $ cat /proc/partitions
major minor #blocks name
7 0 147456 loop0
31 0 256 mtdblock0
31 1 512 mtdblock1
31 2 256 mtdblock2
31 3 10240 mtdblock3
31 4 10240 mtdblock4
31 5 112640 mtdblock5
31 6 10240 mtdblock6
31 7 112640 mtdblock7
31 8 6144 mtdblock8
8 0 1953514584 sda
8 1 1998848 sda1
8 2 1999872 sda2
8 3 1949514752 sda3
8 16 1953514584 sdb
8 17 1998848 sdb1
8 18 1999872 sdb2
8 19 1949514752 sdb3
8 32 1953514584 sdc
8 33 1998848 sdc1
8 34 1999872 sdc2
8 35 1949514752 sdc3
8 48 1953514584 sdd
8 49 1998848 sdd1
8 50 1999872 sdd2
8 51 1949514752 sdd3
31 9 102424 mtdblock9
9 0 1997760 md0
9 1 1998784 md1
31 10 4464 mtdblock10
/ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sda2[4] sdd2[3] sdb2[5] sdc2[6]
1998784 blocks super 1.2 [4/4] [UUUU]
md0 : active raid1 sda1[4] sdd1[3] sdb1[5] sdc1[6]
1997760 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
-
So the volume is indeed gone. Let's have a look at the RAID members:
su

mdadm --examine /dev/sd[abcd]3
After 'su' it will ask you for your password again. It's elevating your login from 'admin' to 'root'.
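A side note on the wildcard, in case it looks cryptic: /dev/sd[abcd]3 is ordinary shell globbing that expands to the four data partitions, so the command is equivalent to writing them out:
mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
-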
Already a big thanks for your time (and judging by your nick: thanks for your time!)
This is the output:
~ # mdadm --examine /dev/sd[abcd]3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x2
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848150464 (5577.23 GiB 5988.51 GB)
Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Recovery Offset : 2193056496 sectors
State : active
Device UUID : 08268038:19d12420:f4d51e5e:a935b818
Update Time : Tue Aug 7 11:15:24 2018
Checksum : 99528fda - correct
Events : 8397
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 66638459:c9900aa2:c3173028:8ac7f14d
Update Time : Tue Aug 7 12:51:32 2018
Checksum : b8d11f5a - correct
Events : 10878
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : .AAA ('A' == active, '.' == missing)
-
This is not complete. It shows only the RAID headers of /dev/sda3 and /dev/sdb3. There should also be a /dev/sdc3 and /dev/sdd3.
Yet this already tells us something went wrong between Tue Aug 7 11:15:24 2018 and Tue Aug 7 12:51:32 2018 (UTC), these being the update times of the two devices.
At Tue Aug 7 11:15:24 2018, /dev/sda3 recorded that the array state was AAAA. So all four members were alive and kicking.
At Tue Aug 7 12:51:32 2018, /dev/sdb3 recorded the state .AAA. So at that moment /dev/sda3 had already been dropped from the array, and the array was degraded. I think the headers of /dev/sdc3 and /dev/sdd3 will show that /dev/sdb3 was also dropped, bringing the array down.
Does that timestamp ring a bell? Have you written anything to the array after Aug 7 12:51:32?
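As an aside, when comparing superblocks like this it can help to filter out just the relevant fields (purely a convenience; the full --examine output above contains everything):
su
mdadm --examine /dev/sd[abcd]3 | grep -E 'dev|Update Time|Events|Array State'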
-
Indeed, it was incomplete, I'll paste again below.
I got a warning mail two nights ago that the array was degraded due to an I/O error on disk 1. When I checked the disks they were all healthy, so I assumed there was a problem with the array itself. That's why I let the NAS repair the array. After the repair (around noon) it still said it was degraded. As the disks were still in perfect health, I thought it might have been a firmware issue making the device think something was wrong. So I did the update. When the device rebooted, I was "welcomed" with the question to set up a volume.
Error List:
1 2018-08-06 16:09:35 crit storage Detected Disk1 I/O error.
1 2018-08-07 00:37:49 alert storage There is a RAID Degraded.
1 2018-08-07 00:50:19 alert storage There is a RAID Degraded.
1 2018-08-07 12:15:42 crit storage Detected Disk1 I/O error.

/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x2
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848150464 (5577.23 GiB 5988.51 GB)
Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Recovery Offset : 2193056496 sectors
State : active
Device UUID : 08268038:19d12420:f4d51e5e:a935b818
Update Time : Tue Aug 7 11:15:24 2018
Checksum : 99528fda - correct
Events : 8397
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 66638459:c9900aa2:c3173028:8ac7f14d
Update Time : Tue Aug 7 12:51:32 2018
Checksum : b8d11f5a - correct
Events : 10878
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : .AAA ('A' == active, '.' == missing)
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d1ace266:039ced6e:7e8e15cf:fde9814d
Update Time : Tue Aug 7 12:51:32 2018
Checksum : 39880d2f - correct
Events : 10878
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : .AAA ('A' == active, '.' == missing)
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d9248da6:15c80681:a3feb907:e643fa7d
Update Time : Tue Aug 7 12:51:32 2018
Checksum : f4687b57 - correct
Events : 10878
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : .AAA ('A' == active, '.' == missing)
-
According to the headers, the array is not down:
/dev/sda3:
Device Role : Active device 0
Update Time : Tue Aug 7 11:15:24 2018
Array State : AAAA ('A' == active, '.' == missing)

/dev/sdb3:
Device Role : Active device 1
Update Time : Tue Aug 7 12:51:32 2018
Array State : .AAA ('A' == active, '.' == missing)

/dev/sdc3:
Device Role : Active device 2
Update Time : Tue Aug 7 12:51:32 2018
Array State : .AAA ('A' == active, '.' == missing)

/dev/sdd3:
Device Role : Active device 3
Update Time : Tue Aug 7 12:51:32 2018
Array State : .AAA ('A' == active, '.' == missing)
Devices 1, 2 and 3 agree that device 0 was dropped, but they also agree that they still form a degraded array. You should be able to assemble the array:

su

mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
I don't know why it failed for the firmware. Maybe mdadm will tell us.
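If the assemble succeeds, the array should reappear as md2. A quick way to confirm (the exact member ordering in the output will differ per system):
cat /proc/mdstat          # md2 should now be listed as an active raid5 with 3 of 4 members
mdadm --detail /dev/md2   # shows the array State and active/working/failed device counts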
-
Hi,
I did what you said, and it reported that the array was started with the 3 disks. But when I try to log in on the GUI, it says my login credentials are incorrect.
In the terminal nothing seemed to be happening anymore (I could still enter new commands). Is it possible that the device is rebuilding the volume in the background, and that it will take some time before it's completed and I can log in again?
Again, thanks for all the time you're putting into this!
This is the examine output I'm getting after the assembly:
~ # mdadm --examine /dev/sd[abcd]3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x2
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848150464 (5577.23 GiB 5988.51 GB)
Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Recovery Offset : 2193056496 sectors
State : active
Device UUID : 08268038:19d12420:f4d51e5e:a935b818
Update Time : Tue Aug 7 11:15:24 2018
Checksum : 99528fda - correct
Events : 8397
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 66638459:c9900aa2:c3173028:8ac7f14d
Update Time : Tue Aug 7 12:51:32 2018
Checksum : b8d11f5a - correct
Events : 10878
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : .AAA ('A' == active, '.' == missing)
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d1ace266:039ced6e:7e8e15cf:fde9814d
Update Time : Tue Aug 7 12:51:32 2018
Checksum : 39880d2f - correct
Events : 10878
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : .AAA ('A' == active, '.' == missing)
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
Name : NAS540:2 (local to host NAS540)
Creation Time : Mon Dec 29 12:51:05 2014
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d9248da6:15c80681:a3feb907:e643fa7d
Update Time : Tue Aug 7 12:51:32 2018
Checksum : f4687b57 - correct
Events : 10878
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : .AAA ('A' == active, '.' == missing)