terraform-provider-azurerm: Error updating maintenance_window in AKS using Terraform when computed window start is in the past
Is there an existing issue for this?
- I have searched the existing issues
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.
Terraform Version
1.5.4
AzureRM Provider Version
3.67.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster
Terraform Configuration
Original Configuration:

```hcl
maintenance_window = {
  utc_offset = "+01:00"
  maintenance_window_auto_upgrade = {
    frequency   = "WEEKLY"
    interval    = 1
    duration    = 4
    day_of_week = "TUESDAY"
    start_time  = "19:00"
  }
  maintenance_window_node_os = {
    frequency   = "WEEKLY"
    interval    = 1
    duration    = 4
    day_of_week = "TUESDAY"
    start_time  = "15:00"
  }
  not_allowed = []
}
```
Updated Configuration:

```hcl
maintenance_window = {
  utc_offset = "+00:00"
  maintenance_window_auto_upgrade = {
    frequency   = "WEEKLY"
    interval    = 1
    duration    = 4
    day_of_week = "TUESDAY"
    start_time  = "09:00"
  }
  maintenance_window_node_os = {
    frequency   = "WEEKLY"
    interval    = 1
    duration    = 4
    day_of_week = "TUESDAY"
    start_time  = "09:00"
  }
  not_allowed = []
}
```
Debug Output/Panic Output
When I apply these changes, I get the following error:

```text
Error: creating/updating Auto Upgrade Schedule Maintenance Configuration for Kubernetes Cluster (Subscription: "redact"
Resource Group Name: "redact-aks-2"
Kubernetes Cluster Name: "redact-aks-2"): maintenanceconfigurations.MaintenanceConfigurationsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidParameter" Message="The input 'maintenanceWindow.startDate' 2023-07-31 00:00:00 +0000 cxTimeZone is before the current time 2023-08-01 09:22:31.836739265 +0000 UTC m=+127068.839937897."

  with module.aks.module.cluster.azurerm_kubernetes_cluster.default,
  on .terraform/redact.tf line 30, in resource "azurerm_kubernetes_cluster" "default":
  30: resource "azurerm_kubernetes_cluster" "default" {
```
Problem/Expected Behaviour
I am experiencing an issue when attempting to update an existing `maintenance_window` in Azure Kubernetes Service (AKS) using Terraform 1.5.4 and the AzureRM provider (`azurerm`) 3.67.0. The issue occurs when running `terraform apply`, specifically when the computed window start time falls in the past.
The AKS cluster is located in a specific region and is running the latest AKS version as of 2023-07-23. I am trying to update the day and time of the maintenance window. Above are the changes I am making.
The key point of this issue is that the logic to calculate the window start date doesn’t seem to consider the current time, thereby allowing a past date to be used where only future dates should be valid. This results in a failure during the apply step.
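To make that concrete, here is a minimal Terraform illustration of the invariant the API enforces. Both timestamps are hypothetical, copied from the error message above, and `timecmp` requires Terraform >= 1.3:

```hcl
# Minimal illustration of the invariant the API enforces. Both timestamps are
# hypothetical, taken from the error message above.
locals {
  now                 = "2023-08-01T09:22:31Z" # current time at apply
  computed_start_date = "2023-07-31T00:00:00Z" # derived from day_of_week alone

  # The service rejects a startDate earlier than the current time; when this
  # is false, the API returns 400 InvalidParameter, as shown above.
  start_date_is_valid = timecmp(local.computed_start_date, local.now) >= 0
}

output "start_date_is_valid" {
  value = local.start_date_is_valid # false for these values
}
```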
I would greatly appreciate any insights into why this might be happening.
Extra
When modifying the maintenance window through the az command-line interface, there are no issues and the action succeeds; the start date is set as required.
Deleting the maintenance configuration and re-applying it also completes without issue.
```shell
az aks maintenanceconfiguration update -g redact-aks-2 --cluster-name redact-aks-2 --name aksManagedNodeOSUpgradeSchedule --schedule-type Weekly --day-of-week Tuesday --interval-weeks 1 --start-time 09:00 --duration 4
```
Details:

```json
{
  "id": "/subscriptions/redact-aks-2/resourceGroups/redact-aks-2/providers/Microsoft.ContainerService/managedClusters/redact-aks-2/maintenanceConfigurations/aksManagedAutoUpgradeSchedule",
  "maintenanceWindow": {
    "durationHours": 4,
    "notAllowedDates": null,
    "schedule": {
      "absoluteMonthly": null,
      "daily": null,
      "relativeMonthly": null,
      "weekly": {
        "dayOfWeek": "Tuesday",
        "intervalWeeks": 1
      }
    },
    "startDate": "2023-08-01",
    "startTime": "09:00",
    "utcOffset": "+00:00"
  },
  "name": "aksManagedNodeOSUpgradeSchedule",
  "notAllowedTime": null,
  "resourceGroup": "redact-aks-2",
  "systemData": null,
  "timeInWeek": null,
  "type": null
}
```

Note that `startDate` is set to the current date (`2023-08-01`), as expected.
About this issue
- Original URL
- State: closed
- Created a year ago
- Reactions: 39
- Comments: 20 (3 by maintainers)
Hi all,
I’ve opened a PR to fix this issue, and here’s a workaround which uses the azapi provider to manage the maintenance configs: https://gist.github.com/ms-henglu/df1119f4243f86e25722ab9320c48bfc
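For readers who want the shape of that workaround without following the link, here is a minimal sketch under stated assumptions: the resource name, the parent cluster reference, and the API version are placeholders rather than the gist's exact contents, and it uses the azapi v1-style `body = jsonencode(...)` syntax.

```hcl
# Sketch only: manage the node OS maintenance configuration directly via azapi.
# The parent_id reference, name, and API version below are assumptions.
resource "azapi_resource" "node_os_upgrade_schedule" {
  type      = "Microsoft.ContainerService/managedClusters/maintenanceConfigurations@2023-06-01"
  name      = "aksManagedNodeOSUpgradeSchedule"
  parent_id = azurerm_kubernetes_cluster.default.id

  body = jsonencode({
    properties = {
      maintenanceWindow = {
        durationHours = 4
        startTime     = "09:00"
        utcOffset     = "+00:00"
        # startDate is deliberately omitted so the service defaults it to the
        # current date, mirroring the az CLI behaviour described earlier.
        schedule = {
          weekly = {
            dayOfWeek     = "Tuesday"
            intervalWeeks = 1
          }
        }
      }
    }
  })
}
```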
Suggestions
- Solution 1: The provider could calculate a fresh timestamp if `start_date` is not specified in the HCL and the `start_date` in the state is in the past (see the sketch after this list).
- Solution 2: It might be sufficient to add a helpful warning printed to the console when this happens, or to update the provider documentation to cover this behaviour.
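On Solution 1, a workaround available today is to pin `start_date` explicitly so the provider never sends a stale computed value. A minimal sketch, assuming your azurerm version exposes `start_date` on these blocks and accepts an RFC 3339 value there (the date itself is a placeholder):

```hcl
# Workaround sketch for Solution 1: set start_date to a known future date.
# The date is a placeholder; the casing of frequency/day_of_week follows the
# provider docs rather than the module values shown earlier.
maintenance_window_auto_upgrade {
  frequency   = "Weekly"
  interval    = 1
  duration    = 4
  day_of_week = "Tuesday"
  start_time  = "09:00"
  utc_offset  = "+00:00"
  start_date  = "2023-08-08T00:00:00Z" # must remain in the future at apply time
}
```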
@ms-henglu yeah, we don’t pass in the start date; we just want it to function like the az CLI does, where if you don’t specify one it defaults to the current date/time.
@Klodjangogo so we set ours to `none` for the upgrade channel because there is no need for us to update our K8s automagically; we want to do it when we’re ready. For the nodeOsUpgradeChannel we are set to `SecurityPatch`, so my apologies for not thinking about this only being applicable to that particular channel. We are currently patiently waiting for the next SecurityPatch image to be released; right now my nodes are on kernel 5.15.0-1049. I had to use the az CLI to update some settings for our window because of this issue, so I’m hoping to see my nodes bounce any day now.