terraform-provider-proxmox: VMs with pass-through disks seem to crash plugin
After using this plugin to provision my VMs from a cloud-init template, I pass through multiple drives to different VMs for Rook/Ceph cluster nodes using Ansible scripts.
I noticed that after I started passing through drives, the plugin would usually crash and no longer let me apply new changes.
My VMs with IDs 31[1:3] are the ones with pass-through drives. I was not able to diagnose much on my own due to the lack of log messages. Please see below and let me know if there is anything else I can provide. Thanks!
Here is my output after terraform apply using plugin v2.9.4:
proxmox_vm_qemu.k8-storage-agent[2]: Refreshing state... [id=pve/qemu/313]
proxmox_vm_qemu.k8-storage-agent[0]: Refreshing state... [id=pve/qemu/311]
proxmox_vm_qemu.k8-storage-agent[1]: Refreshing state... [id=pve/qemu/312]
proxmox_vm_qemu.kube-agent[0]: Refreshing state... [id=pve/qemu/111]
proxmox_vm_qemu.kube-agent[2]: Refreshing state... [id=pve/qemu/113]
proxmox_vm_qemu.kube-agent[1]: Refreshing state... [id=pve/qemu/112]
proxmox_vm_qemu.k8-storage-server[1]: Refreshing state... [id=pve/qemu/302]
proxmox_vm_qemu.k8-storage-server[0]: Refreshing state... [id=pve/qemu/301]
proxmox_vm_qemu.kube-server[2]: Refreshing state... [id=pve/qemu/103]
proxmox_vm_qemu.k8-storage-server[2]: Refreshing state... [id=pve/qemu/303]
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.k8-storage-server[0],
│ on k8s-rook.tf line 1, in resource "proxmox_vm_qemu" "k8-storage-server":
│ 1: resource "proxmox_vm_qemu" "k8-storage-server" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.k8-storage-server[1],
│ on k8s-rook.tf line 1, in resource "proxmox_vm_qemu" "k8-storage-server":
│ 1: resource "proxmox_vm_qemu" "k8-storage-server" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.k8-storage-server[2],
│ on k8s-rook.tf line 1, in resource "proxmox_vm_qemu" "k8-storage-server":
│ 1: resource "proxmox_vm_qemu" "k8-storage-server" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.k8-storage-agent[0],
│ on k8s-rook.tf line 44, in resource "proxmox_vm_qemu" "k8-storage-agent":
│ 44: resource "proxmox_vm_qemu" "k8-storage-agent" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.k8-storage-agent[1],
│ on k8s-rook.tf line 44, in resource "proxmox_vm_qemu" "k8-storage-agent":
│ 44: resource "proxmox_vm_qemu" "k8-storage-agent" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.k8-storage-agent[2],
│ on k8s-rook.tf line 44, in resource "proxmox_vm_qemu" "k8-storage-agent":
│ 44: resource "proxmox_vm_qemu" "k8-storage-agent" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Request cancelled
│
│ with proxmox_vm_qemu.k8s-lb[0],
│ on lb.tf line 2, in resource "proxmox_vm_qemu" "k8s-lb":
│ 2: resource "proxmox_vm_qemu" "k8s-lb" {
│
│ The plugin.(*GRPCProvider).ValidateResourceConfig request was cancelled.
╵
╷
│ Error: Request cancelled
│
│ with proxmox_vm_qemu.k8s-lb[1],
│ on lb.tf line 2, in resource "proxmox_vm_qemu" "k8s-lb":
│ 2: resource "proxmox_vm_qemu" "k8s-lb" {
│
│ The plugin.(*GRPCProvider).ValidateResourceConfig request was cancelled.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.kube-server[2],
│ on main.tf line 65, in resource "proxmox_vm_qemu" "kube-server":
│ 65: resource "proxmox_vm_qemu" "kube-server" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Request cancelled
│
│ with proxmox_vm_qemu.kube-server[0],
│ on main.tf line 65, in resource "proxmox_vm_qemu" "kube-server":
│ 65: resource "proxmox_vm_qemu" "kube-server" {
│
│ The plugin.(*GRPCProvider).UpgradeResourceState request was cancelled.
╵
╷
│ Error: Request cancelled
│
│ with proxmox_vm_qemu.kube-server[1],
│ on main.tf line 65, in resource "proxmox_vm_qemu" "kube-server":
│ 65: resource "proxmox_vm_qemu" "kube-server" {
│
│ The plugin.(*GRPCProvider).UpgradeResourceState request was cancelled.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.kube-agent[2],
│ on main.tf line 108, in resource "proxmox_vm_qemu" "kube-agent":
│ 108: resource "proxmox_vm_qemu" "kube-agent" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.kube-agent[0],
│ on main.tf line 108, in resource "proxmox_vm_qemu" "kube-agent":
│ 108: resource "proxmox_vm_qemu" "kube-agent" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with proxmox_vm_qemu.kube-agent[1],
│ on main.tf line 108, in resource "proxmox_vm_qemu" "kube-agent":
│ 108: resource "proxmox_vm_qemu" "kube-agent" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
Stack trace from the terraform-provider-proxmox_v2.9.4 plugin:
panic: interface conversion: interface {} is []interface {}, not map[string]interface {}
goroutine 47 [running]:
github.com/Telmate/proxmox-api-go/proxmox.(*Client).GetStorageStatus(0xb27400, 0xc00082d638, {0x0, 0x0})
github.com/Telmate/proxmox-api-go@v0.0.0-20211123192920-062fd1a6ab10/proxmox/client.go:264 +0x20a
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc000306600, 0xc00059b638)
github.com/Telmate/proxmox-api-go@v0.0.0-20211123192920-062fd1a6ab10/proxmox/config_qemu.go:763 +0x268a
github.com/Telmate/terraform-provider-proxmox/proxmox._resourceVmQemuRead(0xc0005bc900, {0xab4220, 0xc00033b130})
github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1265 +0x39e
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuRead(0x68, {0xab4220, 0xc00033b130})
github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1237 +0x25
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xbd2b7b, {0xcd9d18, 0xc000612b80}, 0x24, {0xab4220, 0xc00033b130})
github.com/hashicorp/terraform-plugin-sdk/v2@v2.10.1/helper/schema/resource.go:346 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).RefreshWithoutUpgrade(0xc0003a6460, {0xcd9d18, 0xc000612b80}, 0xc00024f380, {0xab4220, 0xc00033b130})
github.com/hashicorp/terraform-plugin-sdk/v2@v2.10.1/helper/schema/resource.go:635 +0x35b
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadResource(0xc00011c090, {0xcd9d18, 0xc000612b80}, 0xc000612c00)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.10.1/helper/schema/grpc_provider.go:576 +0x534
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadResource(0xc000126700, {0xcd9dc0, 0xc0003f9e30}, 0xc0002a6ba0)
github.com/hashicorp/terraform-plugin-go@v0.5.0/tfprotov5/tf5server/server.go:553 +0x3b0
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadResource_Handler({0xba8f20, 0xc000126700}, {0xcd9dc0, 0xc0003f9e30}, 0xc0002a6b40, 0x0)
github.com/hashicorp/terraform-plugin-go@v0.5.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:344 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00013c700, {0xce7150, 0xc000212180}, 0xc0005c79e0, 0xc0003f8b10, 0x11a49f0, 0x0)
google.golang.org/grpc@v1.42.0/server.go:1282 +0xccf
google.golang.org/grpc.(*Server).handleStream(0xc00013c700, {0xce7150, 0xc000212180}, 0xc0005c79e0, 0x0)
google.golang.org/grpc@v1.42.0/server.go:1616 +0xa2a
google.golang.org/grpc.(*Server).serveStreams.func1.2()
google.golang.org/grpc@v1.42.0/server.go:921 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/grpc@v1.42.0/server.go:919 +0x294
Error: The terraform-provider-proxmox_v2.9.4 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
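For what it's worth, the panic points at an unchecked type assertion on a decoded JSON value inside proxmox-api-go's GetStorageStatus (client.go:264): the storage status returned by the API is apparently a JSON array in this case, while the code asserts it to map[string]interface{} and panics. Below is a minimal, generic Go sketch of that failure mode and a defensive variant; it is illustrative only, not the library's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical API payload: a JSON array where an object was expected.
	raw := []byte(`[{"storage": "local", "used": 0}]`)

	var data interface{}
	if err := json.Unmarshal(raw, &data); err != nil {
		panic(err)
	}

	// Unchecked assertion -- this is the pattern that produces
	// "interface conversion: interface {} is []interface {}, not map[string]interface {}":
	//   status := data.(map[string]interface{})

	// Checked assertion: surface an error instead of crashing the whole plugin.
	status, ok := data.(map[string]interface{})
	if !ok {
		fmt.Printf("unexpected storage status format: %T\n", data)
		return
	}
	fmt.Println(status)
}
```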
The debug log level does not seem to contain much:
Tue, 11 Jan 2022 20:05:45 -0500 INF github.com/Telmate/terraform-provider-proxmox/proxmox/util.go:155 > Enabling the capture of log-library logs as the _capturelog flag was detected
Tue, 11 Jan 2022 20:05:45 -0500 INF github.com/Telmate/terraform-provider-proxmox/proxmox/util.go:188 > Logging Started. Root Logger Set to level debug
Tue, 11 Jan 2022 20:05:45 -0500 INF github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1254 > Reading configuration for vmid loggerName=resource_vm_read vmid=313
Tue, 11 Jan 2022 20:05:45 -0500 INF github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1254 > Reading configuration for vmid loggerName=resource_vm_read vmid=312
Tue, 11 Jan 2022 20:05:45 -0500 INF github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1254 > Reading configuration for vmid loggerName=resource_vm_read vmid=111
Tue, 11 Jan 2022 20:05:45 -0500 INF github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1254 > Reading configuration for vmid loggerName=resource_vm_read vmid=302
Tue, 11 Jan 2022 20:05:45 -0500 INF github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:1254 > Reading configuration for vmid loggerName=resource_vm_read vmid=301
When I remove VMs 31[1:3] from the backing tfstate file and comment out the related proxmox_vm_qemu resource blocks, the plugin proceeds with no problem.
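(Side note: rather than hand-editing the tfstate file, the affected addresses can also be dropped from state with terraform state rm; the addresses below assume the resource names from this configuration.)

```sh
# Remove all instances of the two storage resources from state (adjust addresses as needed)
terraform state rm 'proxmox_vm_qemu.k8-storage-server' 'proxmox_vm_qemu.k8-storage-agent'
```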
About this issue
- State: closed
- Created 2 years ago
- Reactions: 4
- Comments: 18
I can also confirm the same behaviour as @jakeschurch and @gnouts.
My Terraform scripts were working perfectly before. Then I ran the following command on the Proxmox server to pass a disk through to one of the VMs:
After that I immediately ran `terraform plan`, which resulted in the following error:
Note that this error is returned instantaneously, definitely not after a 60-second timeout, so this is not the same issue that @david-guenault is talking about.
Which PR closes this issue?
The issue persists with provider version 2.9.14.
Why is this closed? It is still happening… and I see that all the linked complaints are also closed.
I’m not convinced @david-guenault has the same issue. I do not experience a timeout when provisioning; the VMs already exist and Terraform exits nearly instantly on the check. I mean, it even fails on `terraform plan`.
For clarity, this is what I did to pass through the disks, which led to this issue: https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
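(For context, the passthrough procedure on that wiki page boils down to attaching the raw block device to the VM by its /dev/disk/by-id path, something along these lines; the VM ID, bus slot, and device ID here are placeholders:)

```sh
# Attach a physical disk to VM 311 as scsi1 (IDs here are placeholders)
qm set 311 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL
```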
Hello there! I encountered this problem and it is definitely not a plugin problem. It is related to a cloning timeout in Proxmox. The timeout is hardcoded in Proxmox and cannot be changed. I was trying to provision 3 VMs on a node (an Intel NUC with 1 GbE) onto NFS storage (a Synology with 2x1 GbE) from a template located on local storage. I tried a lot of things until I checked the NFS configuration on Proxmox: I just disabled preallocation and everything went fine. I was able to deploy without any error. It is a performance problem, and it is Proxmox's choice not to allow cloning to take longer than one minute (blah blah deadlock blah blah 😃).
You can check the source for the hardcoded timeout here: https://forum.proxmox.com/threads/unable-to-create-image-storage-nas-locked-command-timed-out-aborting.98274/
Here is the related post (Oct 20, 2021, #6):
> For locked storage operations there is a hardcoded 60 second timeout, because the cluster file system automatically releases locks after a while (to prevent deadlocks). The locks are necessary to guarantee that different operations on the storage do not clash with each other (e.g. guarantee that a disk name is not already used).
>
> I don’t see an easy workaround except the two described above: either use the new feature to turn off preallocation or convert the disk to raw before cloning.
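(If anyone wants to try the preallocation workaround mentioned above: on recent Proxmox VE releases the preallocation behaviour of file-based storages (dir/NFS/CIFS) can be set per storage. I believe the command looks roughly like this; the storage name is a placeholder and availability of the option depends on the PVE version:)

```sh
# Disable image preallocation on a file/NFS-backed storage (storage name is a placeholder)
pvesm set my-nfs-storage --preallocation off
```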