kimme  Freshman Member

Comments

  • Something particular to look for? It's a huge list.
  • Back to the previous topic... My NAS just started beeping. Checked the GUI and it's degraded again. When I check the disks, they're all green and healthy. Anything I can check? (See the check sketch after this list.)
  • Great! It's installing now :) Thanks!
  • Sorry I didn't link it; it was a couple of posts next to this one. I just tried your method, but this also doesn't add the repository to the GUI. In /i-data/4ebe74b4/admin/zy-pkgs, ls shows web_prefix.txt, so the text file should be in the correct place, or am I wrong? (See the zy-pkgs layout sketch after this list.) EDIT: Also tried your file with the .dms extension, same result.
  • Backup is running as we speak ;) As for Plex, I've been trying to get it on my NAS, but sadly I can't install the repository. I've added it to the correct folder, renamed it, ... I did everything as described in the other post, but it won't show up in the apps after refreshing them...
  • I have about the same issue as Tipcsi. I've added MetaRepository.zpkg to /i-data/sysvol/admin/zy-pkgs but when I try to retrieve the list it just refreshes the standard packages.
  • Mijzelf, you're the hero of the day! All the data is again where it should be! You've saved me more than 10 years of pictures/memories. When I check the status of the disks I can't find any errors, and the array is also healthy again. I don't know what caused it to be degraded, though. PS: While browsing this forum I also saw you…
  • OK, thanks, I'll check tomorrow when the rebuild is finished, but as far as I know all disks showed a green, healthy status. (The rebuild can be followed as sketched after this list.)
  • Second try at assembling: ~ # mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run mdadm: /dev/md2 has been started with 3 drives (out of 4). Then adding the missing disk manually: ~ # mdadm --manage /dev/md2 --add /dev/sda3 mdadm: added /dev/sda3 ~ # cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] md2 : active raid5…
  • $ dmesg | grep sda [ 20.746660] sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB) [ 20.754443] sd 0:0:0:0: [sda] 4096-byte physical blocks [ 20.760218] sd 0:0:0:0: [sda] Write Protect is off [ 20.765039] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 20.770639] sd 0:0:0:0: [sda] Write cache: enabled,…
  • After the reboot I can log back in so that's solved again. / $ cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] md1 : active raid1 sda2[4] sdd2[3] sdb2[5] sdc2[6] 1998784 blocks super 1.2 [4/4] [UUUU] md0 : active raid1 sda1[4] sdd1[3] sdb1[5] sdc1[6] 1997760 blocks super 1.2 [4/4]…
  • Hi, I did what you said and it reports that the array has been started with the 3 disks. But when I try to log in on the GUI, my login credentials are rejected as incorrect. In the terminal nothing was happening anymore (I could input new commands again), but is it possible that the device is rebuilding the volume in the background and that it takes some time…
  • Indeed, it was incomplete; I'll paste it again below. I got a warning mail two nights ago that the array was degraded due to an I/O error on disk 1. When I checked the disks they were all healthy, so I assumed there was a problem with the array itself. That's why I let the NAS repair the array. After the repair (around…
  • Already a big thanks for your time (given your nick: thanks for your time ;) ). This is the output: ~ # mdadm --examine /dev/sd[abcd]3 /dev/sda3: Magic : a92b4efc Version : 1.2 Feature Map : 0x2 Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b Name : NAS540:2 (local to host NAS540) Creation Time : Mon Dec 29 12:51:05 2014…
  • Don't know if this is what you need, but here we go :) / $ cat /proc/partitions major minor #blocks name 7 0 147456 loop0 31 0 256 mtdblock0 31 1 512 mtdblock1 31 2 256 mtdblock2 31 3 10240 mtdblock3 31 4 10240 mtdblock4 31 5 112640 mtdblock5 31 6 10240 mtdblock6 31 7 112640 mtdblock7 31 8 6144 mtdblock8 8 0 1953514584 sda…
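
A minimal check sketch for the "degraded but all disks look green" question above. The device names (/dev/md2, /dev/sd[abcd]3) are taken from the outputs in this thread and are only assumptions for this particular NAS540; smartctl may not be present on the stock firmware.

    ~ # cat /proc/mdstat                   # which array dropped a member, e.g. [3/4] [_UUU]
    ~ # mdadm --detail /dev/md2            # array state, failed/removed slots, rebuild status
    ~ # mdadm --examine /dev/sd[abcd]3     # per-disk superblocks: event counts and update times
    ~ # dmesg | grep -i -e sda -e error    # kernel I/O errors around the moment it degraded
    ~ # smartctl -a /dev/sda               # SMART attributes, if smartctl exists on the box

If the event counts differ between the disks, the member that fell behind is the one that was kicked out of the array.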
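
For the repository problem above, a rough sketch of what the zy-pkgs directory is usually expected to contain. The single-volume path is an assumption (on this box it may be /i-data/<volume-id>/admin/zy-pkgs instead), and the actual repository URL is in the other thread, so it is not repeated here.

    ~ # cd /i-data/sysvol/admin/zy-pkgs
    ~ # ls -l                     # web_prefix.txt (or MetaRepository.zpkg) directly in this directory
    ~ # cat web_prefix.txt        # as far as I understand it, only the repository URL on a single line

After the file is in place, the package list still has to be retrieved again from the App GUI, as mentioned in the comments above.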
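
And for the rebuild questions above, a small sketch for following the resync from the shell once /dev/sda3 has been re-added (again assuming md2 is the data array, as in the outputs in this thread):

    ~ # cat /proc/mdstat                      # shows a "recovery = x.x% ... finish=NNNmin" line while rebuilding
    ~ # mdadm --detail /dev/md2 | grep -i -e state -e rebuild
    ~ # watch -n 60 cat /proc/mdstat          # if watch is available; otherwise just re-run cat

While the resync is running the box can be slow, which might also explain the sluggish GUI login mentioned above (a guess); waiting for [4/4] [UUUU] in /proc/mdstat before judging the result is the safe option.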