terraform-provider-google: Updating an internal regional Application Load Balancer that uses a NEG backend causes an error.
Hello,
I am running into an issue when updating an internal regional Application Load Balancer that uses a NEG as a backend, after endpoints have been removed from and added back to the NEG.
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
- Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
- If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.
Terraform Version
$ terraform -v
Terraform v1.5.5
Affected Resource(s)
google_compute_region_backend_service
Terraform Configuration Files
data "google_compute_network_endpoint_group" "ilb_network_endpoint_group_zonal" {
count = var.environment == "qa" || var.environment == "test" || var.environment == "dev" ? 1 : 0
name = "name-${var.environment}-jira-neg"
project = local.project_id
zone = "europe-west3-a"
depends_on = [
helm_release.jira
]
}
resource "google_compute_region_health_check" "ilb_health_check_zonal" {
count = var.environment == "qa" || var.environment == "test" || var.environment == "dev" ? 1 : 0
name = "name-${var.environment}-ilb-health-check"
project = local.project_id
region = local.region
timeout_sec = 5
check_interval_sec = 5
healthy_threshold = 2
unhealthy_threshold = 2
http_health_check {
port = "8080"
request_path = "/status"
port_specification = "USE_FIXED_PORT"
}
}
resource "google_compute_region_backend_service" "ilb_backend_service_zonal" {
count = var.environment == "dev" || var.environment == "test" || var.environment == "qa" ? 1 : 0
name = "name-${var.environment}-ilb-backend-service"
project = local.project_id
region = local.region
health_checks = [google_compute_region_health_check.ilb_health_check_zonal[0].id]
protocol = "HTTP"
load_balancing_scheme = "INTERNAL_MANAGED"
enable_cdn = false
session_affinity = "GENERATED_COOKIE"
locality_lb_policy = "RING_HASH"
timeout_sec = 300
backend {
group = data.google_compute_network_endpoint_group.ilb_network_endpoint_group_zonal[0].id
balancing_mode = "RATE"
max_rate_per_endpoint = 1000
capacity_scaler = 1.0
}
consistent_hash {
minimum_ring_size = 1024
}
}
resource "google_compute_region_url_map" "ilb_url_map_zonal" {
count = var.environment == "dev" || var.environment == "test" || var.environment == "qa" ? 1 : 0
name = "name-${var.environment}-ilb-url-map"
project = local.project_id
region = local.region
default_service = google_compute_region_backend_service.ilb_backend_service_zonal[0].id
}
resource "google_compute_region_target_http_proxy" "ilb_target_http_proxy_zonal" {
count = var.environment == "dev" || var.environment == "test" || var.environment == "qa" ? 1 : 0
name = "name-${var.environment}-ilb-https-proxy"
project = local.project_id
region = local.region
url_map = google_compute_region_url_map.ilb_url_map_zonal[0].id
}
resource "google_compute_forwarding_rule" "ilb_global_forwarding_rule_zonal" {
count = var.environment == "dev" || var.environment == "test" || var.environment == "qa" ? 1 : 0
name = "name-${var.environment}-ilb-global-forwarding-rule"
project = local.project_id
region = local.region
network = data.google_compute_network.network.self_link
subnetwork = data.google_compute_subnetwork.subnet.self_link
ip_protocol = "TCP"
load_balancing_scheme = "INTERNAL_MANAGED"
port_range = "80"
target = google_compute_region_target_http_proxy.ilb_target_http_proxy_zonal[0].self_link
ip_address = data.google_compute_address.nginx_ingress_ip.address
}
Expected Behavior
The internal regional Application Load Balancer should be updated.
Actual Behavior
There's an error message:
Error: Provider produced inconsistent final plan
When expanding the plan for
google_compute_region_backend_service.ilb_backend_service_zonal[0] to
include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/google" produced an invalid new value for
.backend: planned set element
cty.ObjectVal(map[string]cty.Value{"balancing_mode":cty.StringVal("RATE"),
"capacity_scaler":cty.NumberIntVal(1), "description":cty.StringVal(""),
"failover":cty.UnknownVal(cty.Bool), "group":cty.UnknownVal(cty.String),
"max_connections":cty.NullVal(cty.Number),
"max_connections_per_endpoint":cty.NullVal(cty.Number),
"max_connections_per_instance":cty.NullVal(cty.Number),
"max_rate":cty.NullVal(cty.Number),
"max_rate_per_endpoint":cty.NumberIntVal(1000),
"max_rate_per_instance":cty.NullVal(cty.Number),
"max_utilization":cty.NullVal(cty.Number)}) does not correlate with any
element in actual.
This is a bug in the provider, which should be reported in the provider's
own issue tracker.
There's no issue if terraform apply is run again.
Steps to Reproduce
- Create a GKE service with a NEG using annotations (a hedged Terraform sketch of such a service follows these steps):
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "name-${environment}-jira-neg"}}}'
- Use the Terraform code above to deploy an internal regional Application Load Balancer with the NEG as a backend.
- Update app parameters to trigger adding and removing endpoints to the NEG, and run terraform apply.
A second execution of terraform apply shows no issues.
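As a sketch of the first step above (assumptions: the service is managed directly with the Terraform kubernetes provider rather than through the helm_release.jira used in this setup, and the name, namespace, selector, and ports are placeholders), the NEG annotation on a Service looks roughly like this:

resource "kubernetes_service" "jira" {
  metadata {
    # Placeholder name/namespace for this sketch.
    name      = "jira"
    namespace = "jira"

    annotations = {
      # Ask GKE to create a standalone zonal NEG for service port 80,
      # named the same way as in the annotation shown in step 1.
      "cloud.google.com/neg" = jsonencode({
        exposed_ports = {
          "80" = { name = "name-${var.environment}-jira-neg" }
        }
      })
    }
  }

  spec {
    # Placeholder selector and ports.
    selector = {
      app = "jira"
    }

    port {
      port        = 80
      target_port = 8080
    }

    type = "ClusterIP"
  }
}

GKE then creates and manages the NEG itself, which is why the Terraform configuration above only reads it through a data source instead of creating it.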
References
- https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg
- https://cloud.google.com/load-balancing/docs/l7-internal
UPDATE
GCP Support confirmed that the ILB is deployed fine and that all commands that were triggered completed without errors for each build.
In addition to that, I conducted one more experiment. I changed the configuration as below:
and it works like a charm as well.
Meanwhile, when I tried to go back to the data source as below (which is almost the same):
I got an error message:
I hope this is helpful for your investigation.
@edwardmedia @shuyama1 I think I was able to find a workaround.
I've changed the code as below:
and it works like a charm:
After that, I checked the tfstate and found that there's no difference:
Meanwhile, when I change it back as below:
the issue comes back as well:
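The exact snippets referenced in these comments are not included above. Purely as a hypothetical illustration of the kind of change that avoids an unknown backend group at plan time (the locals name below is made up, and this is not necessarily the change that was actually applied), the NEG can be referenced through a self_link built from static values instead of through the data source:

locals {
  # Hypothetical helper: the NEG self_link assembled from known strings, so it is
  # fully known at plan time instead of coming from a data source read during apply.
  jira_neg_self_link = "https://www.googleapis.com/compute/v1/projects/${local.project_id}/zones/europe-west3-a/networkEndpointGroups/name-${var.environment}-jira-neg"
}

resource "google_compute_region_backend_service" "ilb_backend_service_zonal" {
  count                 = var.environment == "dev" || var.environment == "test" || var.environment == "qa" ? 1 : 0
  name                  = "name-${var.environment}-ilb-backend-service"
  project               = local.project_id
  region                = local.region
  health_checks         = [google_compute_region_health_check.ilb_health_check_zonal[0].id]
  protocol              = "HTTP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  enable_cdn            = false
  session_affinity      = "GENERATED_COOKIE"
  locality_lb_policy    = "RING_HASH"
  timeout_sec           = 300

  backend {
    # A string that is known during planning, unlike the data source attribute
    # that showed up as cty.UnknownVal in the error above.
    group                 = local.jira_neg_self_link
    balancing_mode        = "RATE"
    max_rate_per_endpoint = 1000
    capacity_scaler       = 1.0
  }

  consistent_hash {
    minimum_ring_size = 1024
  }
}

Because the backend group is then already known when the plan is created, the provider does not have to reconcile an unknown value during apply, which is where the "inconsistent final plan" error is raised.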