NAS542 OMV bookworm/bullseye install issue

Hi,

I know there are already lots of questions about OMV install issues, and I've probably read most of them by now, but none explains the issue I'm facing. I'm hoping there is an easy solution. In short, bookworm ends in a kernel panic, while bullseye boots to a shell but without an IP address or SSH server. Here is probably the most relevant part of the serial output (bookworm 24.093, kernel 6.6.32):

[    3.570000] ubi0: attaching mtd2
[ 3.580000] ubi0: scanning is finished
[ 3.580000] ubi0 error: ubi_read_volume_table: the layout volume was not found
[ 3.590000] ubi0 error: ubi_attach_mtd_dev: failed to attach mtd2, error -22
[ 3.600000] UBI error: cannot attach mtd2
[ 3.600000] clk: Disabling unused clocks
[ 3.740000] ata1: SATA link down (SStatus 0 SControl 300)
[ 3.850000] usb 1-1: new high-speed USB device number 2 using xhci-hcd
[ 3.860000] ata3: SATA link down (SStatus 0 SControl 300)
[ 3.870000] ata4: SATA link down (SStatus 0 SControl 300)
[ 4.100000] hub 1-1:1.0: USB hub found
[ 4.100000] hub 1-1:1.0: 4 ports detected
[ 4.100000] ata2: SATA link down (SStatus 0 SControl 300)
[ 4.110000] md: Waiting for all devices to be available before autodetect
[ 4.120000] md: If you don't use raid, use raid=noautodetect
[ 4.160000] md: Autodetecting RAID arrays.
[ 4.170000] md: autorun ...
[ 4.170000] md: ... autorun DONE.
[ 4.170000] VFS: Cannot open root device "ubi0:rootfs" or unknown-block(0,253): error -19
[ 4.180000] Please append a correct "root=" boot option; here are the available partitions:
[ 4.190000] 1f00 256 mtdblock0
[ 4.190000] (driver?)
[ 4.200000] 1f01 512 mtdblock1
[ 4.200000] (driver?)
[ 4.200000] usb 2-1: new SuperSpeed USB device number 2 using xhci-hcd
[ 4.200000] 1f02 256 mtdblock2
[ 4.210000] (driver?)
[ 4.220000] List of all bdev filesystems:
[ 4.220000] ext3
[ 4.220000] ext4
[ 4.220000] squashfs
[ 4.220000] vfat
[ 4.230000] msdos
[ 4.230000] hfsplus
[ 4.230000]
[ 4.230000] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,253)
[ 4.240000] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 6.6.32+nas5xx #nas5xx
[ 4.250000] Hardware name: Zyxel NAS5xx
[ 4.250000] unwind_backtrace from show_stack+0xb/0xc
[ 4.260000] show_stack from dump_stack_lvl+0x2b/0x34
[ 4.260000] dump_stack_lvl from panic+0xc3/0x25c
[ 4.270000] panic from mount_root_generic+0x18f/0x20c
[ 4.270000] mount_root_generic from prepare_namespace+0x15f/0x1aa
[ 4.280000] prepare_namespace from kernel_init_freeable+0x1dd/0x1ee
[ 4.290000] kernel_init_freeable from kernel_init+0x15/0xec
[ 4.290000] kernel_init from ret_from_fork+0x11/0x1c
[ 4.300000] Exception stack(0xc081dfb0 to 0xc081dff8)
[ 4.300000] dfa0: 00000000 00000000 00000000 00000000
[ 4.310000] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 4.320000] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000
[ 4.330000] CPU0: stopping
[ 4.330000] CPU: 0 PID: 61 Comm: kworker/0:2 Not tainted 6.6.32+nas5xx #nas5xx
[ 4.330000] Hardware name: Zyxel NAS5xx
[ 4.330000] Workqueue: events dbs_work_handler
[ 4.330000] unwind_backtrace from show_stack+0xb/0xc
[ 4.330000] show_stack from dump_stack_lvl+0x2b/0x34
[ 4.330000] dump_stack_lvl from do_handle_IPI+0x159/0x180
[ 4.330000] do_handle_IPI from ipi_handler+0x13/0x18
[ 4.330000] ipi_handler from handle_percpu_devid_irq+0x55/0x144
[ 4.330000] handle_percpu_devid_irq from generic_handle_domain_irq+0x17/0x20
[ 4.330000] generic_handle_domain_irq from gic_handle_irq+0x5f/0x70
[ 4.330000] gic_handle_irq from generic_handle_arch_irq+0x27/0x34
[ 4.330000] generic_handle_arch_irq from call_with_stack+0xd/0x10
[ 4.330000] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,253) ]---

For context, I

  • followed https://seafile.servator.de/nas/zyxel/images/nas5xx-debian-howto.txt beginning with A2 and using bookworm 24.093, which led to a boot loop
  • opened the case, connected to serial and saw the kernel panicking
  • changed /env/config to boot from kernel1
  • tried bullseye 24.093
  • updated the stock firmware (now kernel2 is stock)
  • tried flashing either of the vmlinuz images provided by bookworm/bullseye using /usr/local/bin/zy-kernel1-write (a copy of zy-kernel2-write adjusted to use mtd4 instead of mtd6); see the sketch after this list
  • tried kernels 6.6.32 and 6.1.92: ran bash ./install-linux.sh, then flashed the resulting vmlinuz
  • tried flashing initramfs to mtd5
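
For reference, this is roughly what the flashing step looks like. The mtd-utils commands are only my assumption of what the zy-kernel*-write scripts boil down to, so treat it as a sketch rather than the exact vendor procedure:

    # check the partition layout first; per the scripts, kernel1 = mtd4, kernel2 = mtd6
    cat /proc/mtd
    # assumed equivalent of zy-kernel1-write (wrapping mtd-utils)
    flash_erase /dev/mtd4 0 0
    nandwrite -p /dev/mtd4 vmlinuz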

With kernel 6.x or the stock kernel (3.2.54) and bookworm, it can't mount the rootfs and panics. With the stock kernel and bullseye, it boots to a shell but isn't usable (no IP address, no SSH). Thanks in advance for any hint or explanation as to why!
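
For the bullseye case, I guess I could try to bring the network up by hand from the serial console, something like this (assuming the stock kernel exposes the NIC as eth0 and that a DHCP client is actually installed, which I haven't verified):

    ip link                  # does a NIC show up at all?
    ip link set eth0 up
    dhclient eth0            # or dhcpcd eth0, whichever is installed
    systemctl start ssh      # in case openssh-server is installed but not started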

All Replies

  • Mijzelf

    Do you have any information about those kernels > 3.2.54? The SoC has no support in the vanilla kernel, so it has to be a custom one. I know someone is trying to add support in some GitHub project, but the last time I looked at it, it was far from complete.

    Looking at your boot log, it lacks support for NAND and for the network:

    [ 4.190000] 1f00 256 mtdblock0
    [ 4.190000] (driver?)

    [ 2.740000] e1000e: Intel(R) PRO/1000 Network Driver
    [ 2.740000] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
    [ 2.750000] igb: Intel(R) Gigabit Ethernet Network Driver
    [ 2.760000] igb: Copyright (c) 2007-2014 Intel Corporation.

    (The SoC definitely does not have an Intel NIC.) So the only way to do something useful with this kernel is to not access NAND from Linux (and boot from the harddisk directly, if barebox supports that), and to use a USB network card.
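
    Roughly, the difference on the kernel command line would look like this; the exact /env/config contents and device names on the NAS5xx may differ, so take it as an illustration only:

        # current: rootfs on NAND via UBI (fails, this kernel has no NAND/UBI driver)
        root=ubi0:rootfs ubi.mtd=2 rootfstype=ubifs
        # rootfs on the harddisk instead, so NAND is never touched (sda2 is an example)
        root=/dev/sda2 rootwait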

  • notEvil

    bullseye provides vmlinuz-3.2.0-6-nas5xx and vmlinuz-6.1.82+nas5xx in /boot, and I got 6.6.32 from https://seafile.servator.de/nas/zyxel/kernel/. Since all of them end in nas5xx, I assumed they would be fine.
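
    One way to check that more directly would be to look at the config an image was built with, assuming the kernel was built with CONFIG_IKCONFIG (otherwise this prints nothing) and that the extract-ikconfig script from a kernel source tree is at hand:

        # extract the embedded .config from the downloaded image and check for NAND/UBI support
        ./scripts/extract-ikconfig vmlinuz | grep -E 'CONFIG_(MTD_RAW_NAND|MTD_UBI|UBIFS_FS)='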

  • irvingkay

    Hello, how would you recommend I determine which kernel version is most suitable for my specific nas5xx hardware and use case?
