terraform-provider-google: plugin.terraform-provider-google_v2.11.0_x4: panic: runtime error: invalid memory address or nil pointer dereference

Terraform Version

Terraform v0.12.16

  • provider.google v2.11.0
  • provider.google-beta v3.0.0-beta.1

Affected Resource(s)

  • google_compute_instance_group_manager (the panic occurs in the plugin binary terraform-provider-google_v2.11.0_x4)

Terraform Configuration Files

The crash appears to be related to this module: if my root module stops referencing it, the crash goes away. Creating two managed instance groups from the same template should be fine, right?

resource "google_compute_instance_template" "default" {
  name_prefix = "${var.project_appname}-${var.target_environment}-instance-"
  description = "This template is used to create app server instances in a managed instance group. Managed by Terraform."

  tags = ["ssl", "http"]
  labels = {
    environment = var.target_environment
  }

  instance_description = "${var.project_appname}-${var.target_environment} instance. Managed by Terraform."
  machine_type         = "n1-standard-1"
  project              = var.google_project_name
  region               = var.google_region

  scheduling {
    automatic_restart   = true
    on_host_maintenance = "MIGRATE"
  }

  // Create a new boot disk from an image
  disk {
    source_image = var.img_link
    auto_delete  = true
    boot         = true
  }

  network_interface {
    network = "default"
    access_config {}
  }
  # TODO: not sure if these env vars are useful
  metadata_startup_script = "export APP=${var.project_appname}\nexport REPO=${var.project_repository}"
}

resource "google_compute_instance_group_manager" "webservers_backend" {
  provider    = google-beta
  name        = "${var.project_appname}-${var.target_environment}-backend"
  description = "Instance group, backend servers. Managed by Terraform."

  base_instance_name = "${var.project_appname}-${var.target_environment}-backend"
  zone               = var.google_zone

  version {
    name              = "app_instance_group"
    instance_template = google_compute_instance_template.default.self_link
  }

  target_size = 1

  named_port {
    name = "http"
    port = "8080"
  }

  auto_healing_policies {
    health_check      = google_compute_health_check.autohealing.self_link
    initial_delay_sec = 300
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_instance_group_manager" "webservers_frontend" {
  provider    = google-beta
  name        = "${var.project_appname}-${var.target_environment}-frontend"
  description = "Instance group, frontend servers. Managed by Terraform."

  base_instance_name = "${var.project_appname}-${var.target_environment}-frontend"
  zone               = var.google_zone

  version {
    name              = "app_instance_group"
    instance_template = google_compute_instance_template.default.self_link
  }

  named_port {
    name = "http"
    port = "8080"
  }

  auto_healing_policies {
    health_check      = google_compute_health_check.autohealing.self_link
    initial_delay_sec = 300
  }

  lifecycle {
    create_before_destroy = true
  }
}

# some infrastructure-y things: health check, autoscaler

resource "google_compute_health_check" "autohealing" {
  provider            = google-beta
  name                = "${var.project_appname}-${var.target_environment}-autohealing-health-check"
  check_interval_sec  = 15
  timeout_sec         = 10
  healthy_threshold   = 2
  unhealthy_threshold = 10 # 10 checks × 15s interval = 150 seconds

  http_health_check {
    request_path = "/gcp_healtcheck"
    port         = "8080"
  }
}

resource "google_compute_autoscaler" "default" {
  provider = google-beta

  name   = "${var.project_appname}-${var.target_environment}-frontend-autoscaler"
  zone   = var.google_zone
  target = google_compute_instance_group_manager.webservers_frontend.self_link

  autoscaling_policy {
    max_replicas    = 5
    min_replicas    = 1
    cooldown_period = 60
  }
}
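
The config above references several input variables. For completeness, the module's variable declarations would look roughly like this (a sketch inferred from the references above; types and descriptions are assumptions):

variable "project_appname" {
  type = string
}

variable "target_environment" {
  type = string
}

variable "google_project_name" {
  type = string
}

variable "google_region" {
  type = string
}

variable "google_zone" {
  type = string
}

variable "img_link" {
  type        = string
  description = "Self link of the boot disk image"
}

variable "project_repository" {
  type = string
}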


Debug Output

Console output: https://gist.github.com/hallvors/c41d3ca7bcc19fd1090f993ae25ee01a

Panic Output

https://gist.github.com/hallvors/95553bc0ac2cca81eae2f03f88d25262

Expected Behavior

No crash; Terraform finishes setting up the resources.

Actual Behavior

The provider crashes consistently during both plan and apply, when the run is nearly finished. Deleting the state (and the generated resources) makes plan work again, as far as I can tell.
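
For reference, the reset that makes plan work again looks roughly like this (a sketch; the bucket name and state path are hypothetical, and the resources may need manual cleanup in the console if destroy also crashes):

# Tear down the generated resources
terraform destroy

# Remove the state object from the remote GCS backend (hypothetical path)
gsutil rm gs://my-tf-state-bucket/terraform/state/default.tfstate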

Steps to Reproduce

I mostly just run

  1. terraform apply

with a number of -var arguments (a representative invocation is shown below). The state backend is remote, on GCP.
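
A representative invocation (all variable values here are hypothetical):

terraform apply \
  -var="project_appname=myapp" \
  -var="target_environment=staging" \
  -var="google_project_name=my-gcp-project" \
  -var="google_region=europe-north1" \
  -var="google_zone=europe-north1-a" \
  -var="img_link=projects/my-gcp-project/global/images/myapp-image-v1" \
  -var="project_repository=https://github.com/example/myapp"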

Important Factoids

The config uses one plain google provider, one google-beta provider, and one aliased google provider (to use a different service account auth file).
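
Roughly, the provider setup looks like this (a sketch based on the description; the alias name and credentials file paths are hypothetical):

provider "google" {
  project     = var.google_project_name
  region      = var.google_region
  credentials = file("service-account.json")
}

provider "google-beta" {
  project     = var.google_project_name
  region      = var.google_region
  credentials = file("service-account.json")
}

# Aliased google provider using a different service account auth file
provider "google" {
  alias       = "secondary"
  project     = var.google_project_name
  region      = var.google_region
  credentials = file("other-service-account.json")
}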

References

None

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 2
  • Comments: 19 (1 by maintainers)

Most upvoted comments

Thanks for being so cooperative, @hallvors! Just wanted to provide an update: I’ve been able to repro the crash while limiting the scope down to just the servers and diskimage modules (servers alone didn’t seem to trigger it). While I’m a bit busy at the moment, I should be able to look for the root cause in the next couple of days. Thanks for your patience.