terraform-provider-vsphere: Multiple disks specified in clone errors and resource doesn't power on

Community Guidelines

  • I have read and agree to the HashiCorp Community Guidelines.
  • Vote on this issue by adding a 👍 reaction to the initial description of the original issue to help the maintainers prioritize.
  • Do not leave “+1” or other comments that do not add relevant information or questions.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Terraform Version

v1.1.4

vSphere Provider Version

2.0.2

vSphere Version

7.0.3, also tried on 6.7

Affected Resource(s)

vsphere_virtual_machine

Terraform Configuration Files

resource "vsphere_virtual_machine" "cloned_virtual_machine" {
    name = var.vsphere_virtual_machine_name
    resource_pool_id = data.vsphere_resource_pool.pool.id
    datastore_id = data.vsphere_datastore.datastore.id
    depends_on = [
      data.vsphere_resource_pool.pool
    ]

    num_cpus = var.vsphere_num_cpus
    cpu_hot_add_enabled = true
    memory = var.vsphere_memory
    memory_hot_add_enabled = true
    guest_id = data.vsphere_virtual_machine.template.guest_id
    folder = var.vsphere_folder
    scsi_type = data.vsphere_virtual_machine.template.scsi_type

    network_interface {
      network_id = data.vsphere_network.network.id
      adapter_type = "vmxnet3"
    }

    
    dynamic "disk" {
        for_each = var.vsphere_disks
        content {
            label = "disk${disk.value["number"]}"
            size = disk.value["size"]
            unit_number = disk.value["number"]
        }
    }

    clone {
        template_uuid = data.vsphere_virtual_machine.template.id

        customize {
          network_interface {
            ipv4_address = var.vsphere_ipaddress
            ipv4_netmask = var.vsphere_netmask
            dns_server_list = var.vsphere_dns_servers
          }
          ipv4_gateway = var.vsphere_ipgateway

          windows_options {
            computer_name = var.vsphere_virtual_machine_name
            admin_password = "-------"
            join_domain = "removed"
            domain_admin_user = var.domainadminuser
            domain_admin_password = var.domainadminpassword
            time_zone = 35
          }
          timeout = 30  //some VMs hang on "Getting Started" in Windows and need this extended timeout

        }
    }    
}

I’m using a dynamic block, but the tf plan shows the expected attributes that will be used: disk size, provisioning style, and all other parameters are correct, so it’s nothing to do with the dynamic block. I could have just typed the array below directly into the module. This is the value I’m passing in for disk:

vsphere_disks = [
        {
            number = 0
            size = 75
        },
        {
            number = 1
            size = 100
        }
]
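
For reference, the variable feeding the dynamic block is declared along these lines (a sketch; the description is illustrative, not copied from my module):

```hcl
variable "vsphere_disks" {
  description = "Disks to create on the cloned VM (unit number and size in GB)"
  type = list(object({
    number = number
    size   = number
  }))
}
```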

Debug Output

No negative output except “Unable to enumerate all disks.” That happens because the second hard disk is created at 0 MB with a different provisioning type, so the VM cannot start up.

Panic Output

No panic

Expected Behavior

The VM should have both hard disks as defined in the configuration: disk 1 at 75 GB, disk 2 at 100 GB.

Actual Behavior

The first disk is created as expected; the second disk is 0 MB and thick provisioned, lazy zeroed.

Steps to Reproduce

Run terraform apply, which shows the correct values that will be used for creation.

Important Factoids

I’m using local execution.

References


About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 9
  • Comments: 36

Most upvoted comments

We have the same issue, but only from the pipeline. When I run it from my machine everything is fine, but in the pipeline the same error shows. A side note: we install the vSphere provider (version 2.4.0) manually on the server, at .terraform.d/plugins/registry.terraform.io/hashicorp/vsphere/2.4.0/linux_386/terraform-provider-vsphere_v2.4.0_x5. I don’t know how this is related, but my machine and the pipeline runner use exactly the same version; the only difference is that I use linux_amd64 while the runner uses linux_386.

(Workaround at the bottom, though not the most elegant one.) It happens whether cloning a VM or creating a new one with multiple disks. If you create/clone with a single disk it works fine; however, any additional disk will cause this issue. The vmdk is not created at all, yet the vmx file points to that non-existent vmdk and throws the error. For some reason, the provider does not create any additional vmdks.

resource "vsphere_virtual_machine" "vm" {
    name             = var.new_vm_name
    resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
    datastore_id     = data.vsphere_datastore.datastore.id
    folder           = var.target_folder
    num_cpus         = 8
    num_cores_per_socket = 8
    memory           = 1024
    guest_id         = data.vsphere_virtual_machine.template.guest_id
    scsi_type        = data.vsphere_virtual_machine.template.scsi_type
    network_interface {
        network_id = data.vsphere_network.network.id
        adapter_type = "vmxnet3"
    }
    disk {
        label = "disk0"
        size = data.vsphere_virtual_machine.template.disks.0.size
        thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
        unit_number = 0
    }
    disk {
        label = "disk1"
        size = 100
        thin_provisioned = true
        unit_number = 1
    }
    clone {
      template_uuid = data.vsphere_virtual_machine.template.id
    }
}

provider version 2.3.1

The debug log suggests the provider produces an invalid plan for the 2nd disk:

2023-05-04T11:17:35.062+0300 [WARN]  Provider "registry.terraform.io/hashicorp/vsphere" produced an invalid plan for vsphere_virtual_disk.vmdk1, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .adapter_type: planned value cty.StringVal("lsiLogic") for a non-computed attribute
2023-05-04T11:17:35.073+0300 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"

Workaround

Create or clone a VM with a single vmdk as usual:

resource "vsphere_virtual_machine" "vm" {
    name             = var.new_vm_name
    resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
    datastore_id     = data.vsphere_datastore.datastore.id
    folder           = var.target_folder
    num_cpus         = 8
    num_cores_per_socket = 8
    memory           = 1024
    guest_id         = data.vsphere_virtual_machine.template.guest_id
    scsi_type        = data.vsphere_virtual_machine.template.scsi_type
    network_interface {
      network_id = data.vsphere_network.network.id
      adapter_type = "vmxnet3"
    }
    disk {
      label = "disk0"
      size = data.vsphere_virtual_machine.template.disks.0.size
      thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
      unit_number = 0
    }
    clone {
      template_uuid = data.vsphere_virtual_machine.template.id
    }
    wait_for_guest_net_timeout = 0 # Will not wait for GuestOS Network
}

Then modify the configuration (or create another .tf with the same VM configuration), but create an additional vmdk in the same VM folder and attach it to the VM:

resource "vsphere_virtual_disk" "vmdk1" {
  size               = 20
  type               = "thin"
  vmdk_path          = "/${var.new_vm_name}/${var.new_vm_name}-001.vmdk"
  create_directories = true
  datacenter         = data.vsphere_datacenter.datacenter.name
  datastore          = data.vsphere_datastore.datastore.name
}


resource "vsphere_virtual_machine" "vm" {
    name             = var.new_vm_name
    resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
    datastore_id     = data.vsphere_datastore.datastore.id
    folder           = var.target_folder
    num_cpus         = 8
    num_cores_per_socket = 8
    memory           = 1024
    guest_id         = data.vsphere_virtual_machine.template.guest_id
    scsi_type        = data.vsphere_virtual_machine.template.scsi_type
    network_interface {
      network_id = data.vsphere_network.network.id
      adapter_type = "vmxnet3"
    }
    disk {
      label = "disk0"
      size = data.vsphere_virtual_machine.template.disks.0.size
      thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
      unit_number = 0
    }
    disk {
      label = "disk1"
      attach = true
      path = vsphere_virtual_disk.vmdk1.vmdk_path
      datastore_id = data.vsphere_datastore.datastore.id
      unit_number = 1
    }
    wait_for_guest_net_timeout = 0 # Will not wait for GuestOS Network
    depends_on = [vsphere_virtual_disk.vmdk1]
}
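
The two-step workaround above can also be folded back into the dynamic-block style from the original report, creating one standalone vmdk per extra disk and attaching each one. This is only a sketch built on the resources shown above; the variable names and vmdk path format are assumptions:

```hcl
# One standalone vmdk per additional disk; unit 0 stays a normal disk block
# cloned from the template, so only numbers > 0 are created here.
resource "vsphere_virtual_disk" "extra" {
  for_each = { for d in var.vsphere_disks : d.number => d if d.number > 0 }

  size               = each.value.size
  type               = "thin"
  vmdk_path          = "/${var.new_vm_name}/${var.new_vm_name}-${format("%03d", each.key)}.vmdk"
  create_directories = true
  datacenter         = data.vsphere_datacenter.datacenter.name
  datastore          = data.vsphere_datastore.datastore.name
}

# Inside the vsphere_virtual_machine resource, attach them dynamically
# (attach = true means no size/thin_provisioned is set on the disk block):
#
#   dynamic "disk" {
#     for_each = vsphere_virtual_disk.extra
#     content {
#       label        = "disk${disk.key}"
#       attach       = true
#       path         = disk.value.vmdk_path
#       datastore_id = data.vsphere_datastore.datastore.id
#       unit_number  = tonumber(disk.key)
#     }
#   }
```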