sdm: Error mounting after successful burn to NVMe on RPi 5

Disclaimer: I am not a Linux expert, but I think this might be interesting to report. I am using an RPi 5 with a Pimoroni NVMe bottom HAT and a Patriot P310 240GB NVMe SSD. The burn is almost successful: after burning there is an error mounting the image, if I understand correctly. The (headless) Pi CAN now boot from the NVMe drive, the requested user is created with the password I set, and I can log in via the enabled SSH. However, the hostname is not set to what I specified and is still raspberrypi.

Here’s what I did:

I customised an image with some basic settings (this appeared successful to me; I have the output if needed):

$ sudo sdm --customize --plugin user:"adduser=me|prompt" --plugin L10n:host --plugin disables:piwiz --regen-ssh-host-keys --restart 2023-12-11-bookworm-arm64-lite-my-custom-image.img

Then I burned the image:

$ sudo sdm --burn /dev/nvme0n1 --hostname pi --expand-root --regen-ssh-host-keys 2023-12-11-bookworm-arm64-lite-my-custom-image.img

The burn produced the following output:

* Burn '2023-12-11-bookworm-arm64-lite-my-custom-image.img' (2.7GB, 2.6GiB) to '/dev/nvme0n1'...
dd if=2023-12-11-bookworm-arm64-lite-my-custom-image.img of=/dev/nvme0n1 status=progress bs=16M iflag=direct
2734686208 bytes (2.7 GB, 2.5 GiB) copied, 26 s, 105 MB/s
163+1 records in
163+1 records out
2738880512 bytes (2.7 GB, 2.6 GiB) copied, 26.817 s, 102 MB/s
> Expand Root: Expand partition 'nvme0n1p2' on device '/dev/nvme0n1' from (2.2GB, 2.0GiB) to (239.5GB, 223.1GiB)
* Mount /dev/nvme0n1 to resize the root file system
* Mount device '/dev/nvme0n1'
mount: /mnt/sdm: special device /dev/nvme0n12 does not exist.
       dmesg(1) may have more information after failed mount system call.
? Error mounting IMG '/dev/nvme0n1'

I checked the devices:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
mmcblk0     179:0    0  58.2G  0 disk
├─mmcblk0p1 179:1    0   512M  0 part /boot/firmware
└─mmcblk0p2 179:2    0  57.7G  0 part /
nvme0n1     259:0    0 223.6G  0 disk
├─nvme0n1p1 259:3    0   512M  0 part
└─nvme0n1p2 259:4    0 223.1G  0 part

The dmesg output is included further below due to its length.

Is the partition device name perhaps being constructed incorrectly? The error refers to /dev/nvme0n12, but lsblk shows the partition as nvme0n1p2.
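For reference (my understanding, happy to be corrected): NVMe and SD-card block devices put a "p" between the device name and the partition number, unlike /dev/sdX disks, so appending "2" directly to the device name yields a node that does not exist:

$ ls /dev/nvme0n1*
/dev/nvme0n1  /dev/nvme0n1p1  /dev/nvme0n1p2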

dmesg output (last parts only):

$ dmesg
{--snip--}
[12522.986819] loop0: detected capacity change from 0 to 4292608
[12522.990990] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none.
[12522.994237] loop1: detected capacity change from 0 to 1048576
[12649.728228] EXT4-fs (loop0): unmounting filesystem.
[12745.919759] pcieport 0000:00:00.0: AER: Corrected error received: 0000:00:00.0
[12745.919805] pcieport 0000:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[12745.919810] pcieport 0000:00:00.0:   device [14e4:2712] error status/mask=00001000/00002000
[12745.919817] pcieport 0000:00:00.0:    [12] Timeout
[12760.016392] pcieport 0000:00:00.0: AER: Corrected error received: 0000:00:00.0
[12760.016431] pcieport 0000:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[12760.016435] pcieport 0000:00:00.0:   device [14e4:2712] error status/mask=00001000/00002000
[12760.016439] pcieport 0000:00:00.0:    [12] Timeout
[12760.018105] pcieport 0000:00:00.0: AER: Corrected error received: 0000:00:00.0
[12760.018112] pcieport 0000:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[12760.018113] pcieport 0000:00:00.0:   device [14e4:2712] error status/mask=00001000/00002000
[12760.018115] pcieport 0000:00:00.0:    [12] Timeout
[12768.270624] pcieport 0000:00:00.0: AER: Corrected error received: 0000:00:00.0
[12768.270636] pcieport 0000:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[12768.270639] pcieport 0000:00:00.0:   device [14e4:2712] error status/mask=00001000/00002000
[12768.270642] pcieport 0000:00:00.0:    [12] Timeout
[12768.274330] pcieport 0000:00:00.0: AER: Corrected error received: 0000:00:00.0
[12768.274342] pcieport 0000:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[12768.274344] pcieport 0000:00:00.0:   device [14e4:2712] error status/mask=00001000/00002000
[12768.274347] pcieport 0000:00:00.0:    [12] Timeout
[12768.276632] pcieport 0000:00:00.0: AER: Corrected error received: 0000:00:00.0
[12768.276636] pcieport 0000:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[12768.276638] pcieport 0000:00:00.0:   device [14e4:2712] error status/mask=00001000/00002000
[12768.276641] pcieport 0000:00:00.0:    [12] Timeout
[12768.277541] pcieport 0000:00:00.0: AER: Corrected error received: 0000:00:00.0
[12768.277547] pcieport 0000:00:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[12768.277549] pcieport 0000:00:00.0:   device [14e4:2712] error status/mask=00001000/00002000
[12768.277552] pcieport 0000:00:00.0:    [12] Timeout
[12768.929238]  nvme0n1: p1 p2
[12770.080733]  nvme0n1: p1 p2
[12770.099507] /dev/nvme0n12: Can't open blockdev
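Since the partition expand itself appears to have succeeded, I assume the filesystem resize could be finished by hand with something like this (untested; note the correct p2 name):

$ sudo e2fsck -f /dev/nvme0n1p2    # resize2fs requires a forced check on an unmounted filesystem
$ sudo resize2fs /dev/nvme0n1p2    # with no size given, grows the filesystem to fill the partition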

About this issue

  • State: closed
  • Created 5 months ago
  • Comments: 32 (18 by maintainers)

Most upvoted comments

Excellent! And…apologies for the duffs/hassles along the way, and thank you for sticking with me on getting this working for you.

Hi, me again. xD

It was good I read this, because I was thinking of forking and sending one or two pull requests. There are things I could contribute, knowledge-wise, that could speed up the “burn” process by hundreds of percent, if you are interested (rsync instead of dd, like I do in my script).
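Roughly, the idea is a file-level copy instead of a raw block copy. A simplified, untested sketch (hypothetical mountpoints; partitioning, formatting, and bootloader steps omitted):

# Assumes the image's root partition is loop-mounted at /mnt/src and the
# freshly formatted target partition is mounted at /mnt/dst.
# Only used files are copied, so a mostly-empty image transfers much
# faster than dd'ing every block.
$ sudo rsync -aHAXx /mnt/src/ /mnt/dst/
$ sudo umount /mnt/dst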

I also saw you are doing some unnecessary operations that I could help out with, if you’d like (partition resizing and such).

But you should definitely make the changes you have in mind first, because I might need to move a few things around to do what I’m proposing.

Please don’t hijack other threads…please start a new Issue (or Discussion if you’d prefer) where we can chat about this.

I’m interested in your code to understand the code complexity, tradeoffs, possible issues, etc., as well as proven performance data. I’d much prefer a script that I can download and use on the side rather than something you’ve integrated into sdm.

Thx.

But anyway, it is all now working as expected. Any suggestion on how I can create another partition on that disk, via the burn plugin, that expands to fill all the remaining free space? It’s not fully clear to me from the docs, which only cover expanding root. I have about 630GB left that I want to allocate down to the last byte, but I’d like to do it without calculating clusters or guessing.

Glad that you got the cparams file created. Not sure what’s going on there.

Re creating a “last partition” that fills the disk: There’s no way to do that at the moment, but there should be. I’ve got the parted plugin open to add GPT disk support, so will look into adding this “makes perfect sense” feature.
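In the meantime, a manual equivalent is possible with parted, which accepts 100% as an end position so no sector math is needed (a sketch, not the plugin; the start value and resulting partition number are assumptions to adjust for the actual layout):

$ sudo parted /dev/nvme0n1 unit % print            # note where the last existing partition ends
$ sudo parted /dev/nvme0n1 -- mkpart primary ext4 <end-of-last>% 100%
$ sudo mkfs.ext4 /dev/nvme0n1p3                    # partition name depends on the layout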