terraform-provider-vsphere: clone from template - network_interface.0: ServerFaultCode: The object or item referred to could not be found
Terraform Version
Terraform v0.11.1
- provider.vsphere v1.1.0
Affected Resource(s)
vsphere_virtual_machine
Terraform Configuration Files
(Lots of trimming here.) The called module
...
data "vsphere_network" "target-network" {
name = "${var.network_label_designator}"
datacenter_id = "${data.vsphere_datacenter.target-datacenter.id}"
}
...
resource "vsphere_virtual_machine" "generic" {
name = "${var.name_prefix}${format("%02d", count.index + var.name_starting_val)}"
resource_pool_id = "${data.vsphere_resource_pool.target-resource-pool.id}"
datastore_id = "${data.vsphere_datastore.target-datastore.id}"
...
network_interface {
network_id = "${data.vsphere_network.target-network.id}"
# Comment out adapter_type during troubleshooting. VMXNET3 end to end
# adapter_type = "${data.vsphere_virtual_machine.source-template.network_interface_types[0]}"
}
...
clone {
template_uuid = "${data.vsphere_virtual_machine.source-template.id}"
# Timout in minutes. Upped from default 30. (Docs say 10). Clone was taking longer than that and upon fail leaving stubs.
timeout = "60"
customize {
network_interface {}
linux_options {
host_name = "${var.name_prefix}${format("%02d", count.index + var.name_starting_val)}"
domain = "${var.dns_domain}"
}
}
}
Successful Plan Output
[terraform@tf-dev01 jenkins-agents-batch1]$ terraform plan -var-file="jenkins-agents.tfvars" -out=begin-refactor-09
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.vsphere_datacenter.target-datacenter: Refreshing state...
data.vsphere_virtual_machine.source-template: Refreshing state...
data.vsphere_network.target-network: Refreshing state...
data.vsphere_datastore.target-datastore: Refreshing state...
data.vsphere_resource_pool.target-resource-pool: Refreshing state...
data.vsphere_datacenter.target-datacenter: Refreshing state...
data.vsphere_network.target-network: Refreshing state...
data.vsphere_datastore.target-datastore: Refreshing state...
data.vsphere_virtual_machine.source-template: Refreshing state...
data.vsphere_resource_pool.target-resource-pool: Refreshing state...
data.vsphere_datacenter.target-datacenter: Refreshing state...
data.vsphere_datastore.target-datastore: Refreshing state...
data.vsphere_network.target-network: Refreshing state...
data.vsphere_virtual_machine.source-template: Refreshing state...
data.vsphere_resource_pool.target-resource-pool: Refreshing state...
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ module.linux-build.vsphere_virtual_machine.generic
id: <computed>
boot_retry_delay: "10000"
change_version: <computed>
clone.#: "1"
clone.0.customize.#: "1"
clone.0.customize.0.linux_options.#: "1"
clone.0.customize.0.linux_options.0.domain: "mydomain.redacted.com"
clone.0.customize.0.linux_options.0.host_name: "cent69-build04"
clone.0.customize.0.linux_options.0.hw_clock_utc: "true"
clone.0.customize.0.network_interface.#: "1"
clone.0.customize.0.timeout: "10"
clone.0.template_uuid: "422647b0-70c7-8237-1e36-b40344fb3e2d"
clone.0.timeout: "60"
cpu_limit: "-1"
cpu_share_count: <computed>
cpu_share_level: "normal"
datastore_id: "datastore-74"
default_ip_address: <computed>
disk.#: "1"
disk.0.attach: "false"
disk.0.device_address: <computed>
disk.0.disk_mode: "persistent"
disk.0.disk_sharing: "sharingNone"
disk.0.eagerly_scrub: "false"
disk.0.io_limit: "-1"
disk.0.io_reservation: "0"
disk.0.io_share_count: "0"
disk.0.io_share_level: "normal"
disk.0.keep_on_remove: "false"
disk.0.key: "0"
disk.0.name: "cent69-build04.vmdk"
disk.0.size: "250"
disk.0.thin_provisioned: "true"
disk.0.unit_number: "0"
disk.0.write_through: "false"
ept_rvi_mode: "automatic"
firmware: "bios"
folder: "FunctionalSlaves"
force_power_off: "true"
guest_id: "centos64Guest"
guest_ip_addresses.#: <computed>
host_system_id: <computed>
hv_mode: "hvAuto"
imported: <computed>
memory: "1024"
memory_limit: "-1"
memory_share_count: <computed>
memory_share_level: "normal"
migrate_wait_timeout: "30"
name: "cent69-build04"
network_interface.#: "1"
network_interface.0.adapter_type: "vmxnet3"
network_interface.0.bandwidth_limit: "-1"
network_interface.0.bandwidth_reservation: "0"
network_interface.0.bandwidth_share_count: <computed>
network_interface.0.bandwidth_share_level: "normal"
network_interface.0.device_address: <computed>
network_interface.0.key: <computed>
network_interface.0.mac_address: <computed>
network_interface.0.network_id: "dvportgroup-277"
num_cores_per_socket: "5"
num_cpus: "10"
reboot_required: <computed>
resource_pool_id: "resgroup-61"
run_tools_scripts_after_power_on: "true"
run_tools_scripts_after_resume: "true"
run_tools_scripts_before_guest_shutdown: "true"
run_tools_scripts_before_guest_standby: "true"
scsi_controller_count: "1"
scsi_type: "pvscsi"
shutdown_wait_timeout: "3"
swap_placement_policy: "inherit"
uuid: <computed>
vmware_tools_status: <computed>
vmx_path: <computed>
wait_for_guest_net_timeout: "5"
+ module.linux-test.vsphere_virtual_machine.generic
id: <computed>
boot_retry_delay: "10000"
change_version: <computed>
clone.#: "1"
clone.0.customize.#: "1"
clone.0.customize.0.linux_options.#: "1"
clone.0.customize.0.linux_options.0.domain: "mydomain.redacted.com"
clone.0.customize.0.linux_options.0.host_name: "cent69-da36"
clone.0.customize.0.linux_options.0.hw_clock_utc: "true"
clone.0.customize.0.network_interface.#: "1"
clone.0.customize.0.timeout: "10"
clone.0.template_uuid: "422647b0-70c7-8237-1e36-b40344fb3e2d"
clone.0.timeout: "60"
cpu_limit: "-1"
cpu_share_count: <computed>
cpu_share_level: "normal"
datastore_id: "datastore-74"
default_ip_address: <computed>
disk.#: "1"
disk.0.attach: "false"
disk.0.device_address: <computed>
disk.0.disk_mode: "persistent"
disk.0.disk_sharing: "sharingNone"
disk.0.eagerly_scrub: "false"
disk.0.io_limit: "-1"
disk.0.io_reservation: "0"
disk.0.io_share_count: "0"
disk.0.io_share_level: "normal"
disk.0.keep_on_remove: "false"
disk.0.key: "0"
disk.0.name: "cent69-da36.vmdk"
disk.0.size: "250"
disk.0.thin_provisioned: "true"
disk.0.unit_number: "0"
disk.0.write_through: "false"
ept_rvi_mode: "automatic"
firmware: "bios"
folder: "FunctionalSlaves"
force_power_off: "true"
guest_id: "centos64Guest"
guest_ip_addresses.#: <computed>
host_system_id: <computed>
hv_mode: "hvAuto"
imported: <computed>
memory: "1024"
memory_limit: "-1"
memory_share_count: <computed>
memory_share_level: "normal"
migrate_wait_timeout: "30"
name: "cent69-da36"
network_interface.#: "1"
network_interface.0.adapter_type: "vmxnet3"
network_interface.0.bandwidth_limit: "-1"
network_interface.0.bandwidth_reservation: "0"
network_interface.0.bandwidth_share_count: <computed>
network_interface.0.bandwidth_share_level: "normal"
network_interface.0.device_address: <computed>
network_interface.0.key: <computed>
network_interface.0.mac_address: <computed>
network_interface.0.network_id: "dvportgroup-277"
num_cores_per_socket: "2"
num_cpus: "4"
reboot_required: <computed>
resource_pool_id: "resgroup-61"
run_tools_scripts_after_power_on: "true"
run_tools_scripts_after_resume: "true"
run_tools_scripts_before_guest_shutdown: "true"
run_tools_scripts_before_guest_standby: "true"
scsi_controller_count: "1"
scsi_type: "pvscsi"
shutdown_wait_timeout: "3"
swap_placement_policy: "inherit"
uuid: <computed>
vmware_tools_status: <computed>
vmx_path: <computed>
wait_for_guest_net_timeout: "5"
+ module.utility-agents.vsphere_virtual_machine.generic
id: <computed>
boot_retry_delay: "10000"
change_version: <computed>
clone.#: "1"
clone.0.customize.#: "1"
clone.0.customize.0.linux_options.#: "1"
clone.0.customize.0.linux_options.0.domain: "mydomain.redacted.com"
clone.0.customize.0.linux_options.0.host_name: "dev-util01"
clone.0.customize.0.linux_options.0.hw_clock_utc: "true"
clone.0.customize.0.network_interface.#: "1"
clone.0.customize.0.timeout: "10"
clone.0.template_uuid: "422647b0-70c7-8237-1e36-b40344fb3e2d"
clone.0.timeout: "60"
cpu_limit: "-1"
cpu_share_count: <computed>
cpu_share_level: "normal"
datastore_id: "datastore-74"
default_ip_address: <computed>
disk.#: "1"
disk.0.attach: "false"
disk.0.device_address: <computed>
disk.0.disk_mode: "persistent"
disk.0.disk_sharing: "sharingNone"
disk.0.eagerly_scrub: "false"
disk.0.io_limit: "-1"
disk.0.io_reservation: "0"
disk.0.io_share_count: "0"
disk.0.io_share_level: "normal"
disk.0.keep_on_remove: "false"
disk.0.key: "0"
disk.0.name: "dev-util01.vmdk"
disk.0.size: "250"
disk.0.thin_provisioned: "true"
disk.0.unit_number: "0"
disk.0.write_through: "false"
ept_rvi_mode: "automatic"
firmware: "bios"
folder: "FunctionalSlaves"
force_power_off: "true"
guest_id: "centos64Guest"
guest_ip_addresses.#: <computed>
host_system_id: <computed>
hv_mode: "hvAuto"
imported: <computed>
memory: "1024"
memory_limit: "-1"
memory_share_count: <computed>
memory_share_level: "normal"
migrate_wait_timeout: "30"
name: "dev-util01"
network_interface.#: "1"
network_interface.0.adapter_type: "vmxnet3"
network_interface.0.bandwidth_limit: "-1"
network_interface.0.bandwidth_reservation: "0"
network_interface.0.bandwidth_share_count: <computed>
network_interface.0.bandwidth_share_level: "normal"
network_interface.0.device_address: <computed>
network_interface.0.key: <computed>
network_interface.0.mac_address: <computed>
network_interface.0.network_id: "dvportgroup-277"
num_cores_per_socket: "1"
num_cpus: "1"
reboot_required: <computed>
resource_pool_id: "resgroup-61"
run_tools_scripts_after_power_on: "true"
run_tools_scripts_after_resume: "true"
run_tools_scripts_before_guest_shutdown: "true"
run_tools_scripts_before_guest_standby: "true"
scsi_controller_count: "1"
scsi_type: "pvscsi"
shutdown_wait_timeout: "3"
swap_placement_policy: "inherit"
uuid: <computed>
vmware_tools_status: <computed>
vmx_path: <computed>
wait_for_guest_net_timeout: "5"
Plan: 3 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: begin-refactor-09
To perform exactly these actions, run the following command to apply:
terraform apply "begin-refactor-09"
Errors on apply
module.linux-test.vsphere_virtual_machine.generic: Still creating... (31m20s elapsed)
module.utility-agents.vsphere_virtual_machine.generic: Still creating... (31m20s elapsed)
module.linux-build.vsphere_virtual_machine.generic: Still creating... (31m20s elapsed)
Error: Error applying plan:
3 error(s) occurred:
* module.linux-test.vsphere_virtual_machine.generic: 1 error(s) occurred:
* vsphere_virtual_machine.generic: network_interface.0: ServerFaultCode: The object or item referred to could not be found.
* module.linux-build.vsphere_virtual_machine.generic: 1 error(s) occurred:
* vsphere_virtual_machine.generic: network_interface.0: ServerFaultCode: The object or item referred to could not be found.
* module.utility-agents.vsphere_virtual_machine.generic: 1 error(s) occurred:
* vsphere_virtual_machine.generic: network_interface.0: ServerFaultCode: The object or item referred to could not be found.
Terraform does not automatically rollback in the face of errors.
...
State after failed apply
[terraform@tf-dev01 jenkins-agents-batch1]$ terraform state list
module.linux-build.vsphere_datacenter.target-datacenter
module.linux-build.vsphere_datastore.target-datastore
module.linux-build.vsphere_network.target-network
module.linux-build.vsphere_resource_pool.target-resource-pool
module.linux-build.vsphere_virtual_machine.generic
module.linux-build.vsphere_virtual_machine.source-template
module.linux-test.vsphere_datacenter.target-datacenter
module.linux-test.vsphere_datastore.target-datastore
module.linux-test.vsphere_network.target-network
module.linux-test.vsphere_resource_pool.target-resource-pool
module.linux-test.vsphere_virtual_machine.generic
module.linux-test.vsphere_virtual_machine.source-template
module.utility-agents.vsphere_datacenter.target-datacenter
module.utility-agents.vsphere_datastore.target-datastore
module.utility-agents.vsphere_network.target-network
module.utility-agents.vsphere_resource_pool.target-resource-pool
module.utility-agents.vsphere_virtual_machine.generic
module.utility-agents.vsphere_virtual_machine.source-template
I am left with three VM “stubs” in vSphere. They look like successful clones, but they are powered off and unconfigured: their CPU and memory counts match the template rather than the Terraform code.
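For what it's worth, the stubs can be inspected from the command line with govc (a sketch, not part of the original report; it assumes govc is installed and pointed at the same vCenter via GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD):

# Show power state, CPU, and memory of one of the stub VMs named in the plan
# output above, then list its devices and their backings.
govc vm.info cent69-build04
govc device.info -vm cent69-build04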
Error on destroy
[terraform@tf-dev01 jenkins-agents-batch1]$ terraform destroy -var-file="jenkins-agents.tfvars"
data.vsphere_datacenter.target-datacenter: Refreshing state...
data.vsphere_virtual_machine.source-template: Refreshing state...
data.vsphere_resource_pool.target-resource-pool: Refreshing state...
data.vsphere_network.target-network: Refreshing state...
data.vsphere_datastore.target-datastore: Refreshing state...
data.vsphere_datacenter.target-datacenter: Refreshing state...
vsphere_virtual_machine.generic: Refreshing state... (ID: 422ffd37-72b7-bf04-ba07-6f0991e64f48)
data.vsphere_virtual_machine.source-template: Refreshing state...
data.vsphere_datastore.target-datastore: Refreshing state...
data.vsphere_resource_pool.target-resource-pool: Refreshing state...
data.vsphere_network.target-network: Refreshing state...
data.vsphere_datacenter.target-datacenter: Refreshing state...
vsphere_virtual_machine.generic: Refreshing state... (ID: 422f52a2-1d72-3209-35ec-052e906ebd01)
data.vsphere_datastore.target-datastore: Refreshing state...
data.vsphere_resource_pool.target-resource-pool: Refreshing state...
data.vsphere_network.target-network: Refreshing state...
data.vsphere_virtual_machine.source-template: Refreshing state...
vsphere_virtual_machine.generic: Refreshing state... (ID: 422f3001-a588-c417-4804-05f83f60f04e)
Error: Error refreshing state: 3 error(s) occurred:
* module.utility-agents.vsphere_virtual_machine.generic: 1 error(s) occurred:
* module.utility-agents.vsphere_virtual_machine.generic: vsphere_virtual_machine.generic: network_interface.0: cannot find network device: invalid ID ""
* module.linux-test.vsphere_virtual_machine.generic: 1 error(s) occurred:
* module.linux-test.vsphere_virtual_machine.generic: vsphere_virtual_machine.generic: network_interface.0: cannot find network device: invalid ID ""
* module.linux-build.vsphere_virtual_machine.generic: 1 error(s) occurred:
* module.linux-build.vsphere_virtual_machine.generic: vsphere_virtual_machine.generic: network_interface.0: cannot find network device: invalid ID ""
Steps to Reproduce
1. terraform plan
2. terraform apply
3. terraform destroy
Important Factoids
To clean up the “stub” VMs I tried terraform taint, but that didn't get me anywhere. I ended up running
for state in $(terraform state list); do terraform state rm $state; done
and then deleting the VMs manually in vSphere.
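In hindsight, removing just the three failed VM resources from state (the addresses reported in the apply errors) would probably have been enough; a sketch, not what was actually run:

# Drop only the failed VM resources from state; the data sources can stay.
terraform state rm module.linux-build.vsphere_virtual_machine.generic
terraform state rm module.linux-test.vsphere_virtual_machine.generic
terraform state rm module.utility-agents.vsphere_virtual_machine.generic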
My failure to generate debug logs
[terraform@tf-dev01 jenkins-agents-batch1]$ echo $TF_LOG
DEBUG
[terraform@tf-dev01 jenkins-agents-batch1]$ echo $TF_LOG_PATH
/tmp/tflog/terraform.log
[terraform@tf-dev01 jenkins-agents-batch1]$ echo $VSPHERE_CLIENT_DEBUG
1
But I'm not getting any logs: /tmp/tflog/terraform.log is empty, and I'm not sure where to look for the vSphere client logs.
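One common cause of empty logs is that the variables were set in the interactive shell but never exported, so the terraform process does not inherit them. A minimal sketch, assuming bash and the same paths as above:

# Export the variables so the terraform child process inherits them, and make
# sure the log directory exists and is writable before running.
mkdir -p /tmp/tflog
export TF_LOG=DEBUG
export TF_LOG_PATH=/tmp/tflog/terraform.log
export VSPHERE_CLIENT_DEBUG=1
terraform plan -var-file="jenkins-agents.tfvars"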
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 3
- Comments: 24 (6 by maintainers)
OK - yeah, I think the issue here is that we need to allow cloning with NICs that don't have networks assigned.
I think this might be safe to do across the board - while we require a network in config, I don't think it's a dangerous scenario to skip reading the network if it doesn't exist. We could possibly even exclude it from updates if it's absent, but that would need checking.
Same problem here. I have two clusters: one prod using an Enterprise license, the other dev using a Standard license. The template was created in prod. Before trying Terraform against the dev cluster I had no problem creating dev clones via the VMware UI, which manages both clusters. For now I will create another template solely for the dev cluster, since a template without a network interface is not an option at the moment. Hopefully there will be a solution that lets Terraform apply the correct network I specify in the Terraform manifest, without erroring about the absence of the “old” network embedded in the template.
@jason-azze can you check your template's NIC and see if it has a valid network visible from where the template is currently located?
If it doesn't, can you set one and try the clone again?
Thanks!
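One way to check the template's NIC backing from the command line is govc, as above (a sketch; the template path is a placeholder):

# List the template's devices; the ethernet entries show the network backing,
# which should name a port group that still exists where the template lives.
govc device.info -vm "/YourDatacenter/vm/path/to/the-template"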