terraform-provider-azurerm: azurerm_storage_data_lake_gen2_filesystem: datalakestore.Client#GetProperties: Failure responding to request: StatusCode=403
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform (and AzureRM Provider) Version
$ terraform -v
Terraform v0.12.24
+ provider.azurerm v2.7.0
+ provider.random v2.2.1
Affected Resource(s)
azurerm_storage_data_lake_gen2_filesystem
Terraform Configuration Files
provider "azurerm" {
version = "~> 2.7.0"
features {}
}
provider "random" {
version = "~> 2.2.0"
}
locals {
resource_group_name = "rg-dev-test"
storage_account_name = "devtest"
location = "australiaeast"
}
resource "random_string" "unique_id" {
length = 24 - length(local.storage_account_name)
special = false
upper = false
}
resource "azurerm_resource_group" "rg" {
name = local.resource_group_name
location = local.location
}
resource "azurerm_storage_account" "new_storage_account" {
name = "${local.storage_account_name}${random_string.unique_id.result}"
resource_group_name = azurerm_resource_group.rg.name
location = local.location
account_tier = "Standard"
account_replication_type = "LRS"
account_kind = "StorageV2"
is_hns_enabled = "true"
network_rules {
default_action = "Allow"
}
}
resource "azurerm_storage_data_lake_gen2_filesystem" "new_data_container" {
name = "test-one"
storage_account_id = azurerm_storage_account.new_storage_account.id
}
Debug Output
https://gist.github.com/shadowmint/3bc424a8fb2bba0415bd4ee67dfd8572
Panic Output
N/A
Expected Behavior
The azurerm_storage_data_lake_gen2_filesystem resource should have been created successfully.
Actual Behavior
Error: Error checking for existence of existing File System "test-one" (Account "devtestp672h8fwgdvcjsv8i"): datalakestore.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: "" error: EOF
on main.tf line 41, in resource "azurerm_storage_data_lake_gen2_filesystem" "new_data_container":
41: resource "azurerm_storage_data_lake_gen2_filesystem" "new_data_container" {
Steps to Reproduce
terraform apply
Important Factoids
- Note the storage account has default_action = "Allow", i.e. it is public. This is not a firewall issue.
- This issue manifests in multiple regions; it does not appear to be a "region specific certificate rotation" issue as in similar issues.
- The user is not an SP with missing AD credentials; it has the role "Service Administrator | Has full access to all resources in the subscription".
References
This seems similar to an issue with the azurerm_storage_account resource that caused the same sort of error and was fixed in 2.1.0: https://github.com/terraform-providers/terraform-provider-azurerm/pull/6050
It seems plausible from the diff that the fix applied there was not applied to azurerm_storage_data_lake_gen2_filesystem, as the added tests only refer to the blob container type.
About this issue
- Original URL
- State: open
- Created 4 years ago
- Reactions: 77
- Comments: 42 (4 by maintainers)
@LaurentLesle are you sure you don't simply have the storage role assigned to your account like njuCZ?
This still doesn't work, as I previously described, because it's not using the access token to create the storage container.
So… just to be absolutely clear: No, this is not resolved in the 2.25.0 provider.
Use Terraform they said, AzureRM is a first-class citizen! First thing I try… great.
Is there a fix for this? Enabling public access to be able to create a container isn't really ideal.
Are there any updates to this issue? I still have the problem, running the following:
Can confirm that this is still an issue with 2.60.0.
As a workaround I have given myself the "Storage Blob Data Contributor" role at the subscription level. After a while and a re-login with `az logout` and `az login` it worked. Of course this is not the perfect solution.
I am also still experiencing this behaviour.
The workaround on this StackOverflow post has helped me, which assigns the Storage Blob Data Owner role, although I had to add the dependency with `depends_on` on all other resources referencing the `azurerm_storage_account` resource.
In my case it's been enabled for all networks; even then I'm getting the same issue.
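A minimal sketch of the role-assignment-plus-`depends_on` workaround mentioned above, reusing the resource names from the repro configuration in the issue body; resolving the current principal through the `azurerm_client_config` data source is my own assumption, not something stated in the comments:

```hcl
data "azurerm_client_config" "current" {}

# Grant the identity running Terraform data-plane access on the account.
resource "azurerm_role_assignment" "blob_owner" {
  scope                = azurerm_storage_account.new_storage_account.id
  role_definition_name = "Storage Blob Data Owner"
  principal_id         = data.azurerm_client_config.current.object_id
}

resource "azurerm_storage_data_lake_gen2_filesystem" "new_data_container" {
  name               = "test-one"
  storage_account_id = azurerm_storage_account.new_storage_account.id

  # Ensure the role assignment exists before the filesystem is created.
  depends_on = [azurerm_role_assignment.blob_owner]
}
```

Role assignments can take a while to propagate, which lines up with the re-login delay and the timing issues mentioned elsewhere in this thread.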
In my case, I was able to resolve this issue after adding the Terraform Enterprise subnet to the storage account network rules:

resource "azurerm_storage_account_network_rules" "sa" {
  resource_group_name        = module.resource_group.name
  storage_account_name       = azurerm_storage_account.sa.name
  default_action             = "Deny"
  bypass                     = ["AzureServices"]
  virtual_network_subnet_ids = [module.virtual_network.subnet["tfe_public"].id]
}
@tsukabon, if you read the comment history you'll see this:
I'll also point out, again, that this is a bug.
The Azure CLI uses the auth token, not AD, to perform this operation, which is why it works without assigning that role.
Using AD is an option specified at the root level of the provider, and should not be used by default.
If what you have described works for you, that's great! However, be aware that in general it will fail due to timing issues when assigning roles.
cheers~
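For context, the root-level provider option being referred to here appears to be `storage_use_azuread` (my assumption; it may not be available in every provider version). A minimal sketch of opting in explicitly:

```hcl
provider "azurerm" {
  features {}

  # Use Azure AD authentication for storage data-plane calls instead of the
  # shared key; when left unset this defaults to false.
  storage_use_azuread = true
}
```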
1st solution: you can explicitly add the IP of the machine from which you are executing this Terraform script to the storage account's Firewall and virtual networks settings.
It can be your local machine or a DevOps self-hosted agent.
In my case, my self-hosted agent is part of the same virtual network that is allowed in Firewall and virtual networks, and it is working perfectly.
2nd solution (for POC purposes): change your storage account network settings to allow access from all networks, i.e. disable the firewall.
After this, you can check access for the Service Principal.
@rlevchenko you are correct, I have updated my comment.
For anyone coming from search engines: this "datalakestore.Client#GetProperties" error with azurerm_storage_data_lake_gen2_filesystem happens when you have the firewall enabled on the Storage Account.
Just add the resolving IP of the machine running Terraform to the exclusion list, or the subnet if it is inside the Azure environment.
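As an illustration of that exclusion, here is a hedged sketch reusing the storage account from the repro configuration above; the IP address is a placeholder and the commented subnet reference is hypothetical:

```hcl
resource "azurerm_storage_account" "new_storage_account" {
  name                     = "${local.storage_account_name}${random_string.unique_id.result}"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = local.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  account_kind             = "StorageV2"
  is_hns_enabled           = true

  network_rules {
    default_action = "Deny"
    bypass         = ["AzureServices"]

    # Placeholder: public IP of the machine (or self-hosted agent) running Terraform.
    ip_rules = ["203.0.113.10"]

    # For agents running inside Azure, allow their subnet instead (hypothetical subnet resource):
    # virtual_network_subnet_ids = [azurerm_subnet.agents.id]
  }
}
```

Note the caveat quoted in a later comment: IP network rules have no effect on requests originating from the same Azure region as the storage account, which is why the subnet-based variant is needed for in-region agents.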
@vikascnr it's not a solution. "Disable firewall"… it's a kind of privilege for some.
The Azure team should fix this limitation ("IP network rules have no effect on requests originating from the same Azure region as the storage account.") in order to resolve the issue, because you get the same behaviour with pipelines in Azure DevOps, for instance. I don't have any issues when the firewall is disabled, though.
Thanks for sharing @shadowmint ! FYI I still see the same issue in azurerm 2.18
You can work around this by explicitly assigning the role Storage Blob Data Contributor to the SP or user on the parent resource group, using `azurerm_role_assignment`.
However, it's not clear if this is something wrong in the docs or if this is actually a bug; it seems like a bug, because the storage account access token should have superuser permission to add containers, even when this role is not assigned, and it can be done via the portal / PowerShell.
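A minimal sketch of that role assignment, reusing the resource group from the repro configuration and (as an assumption on my part) the `azurerm_client_config` data source to resolve the current principal:

```hcl
data "azurerm_client_config" "current" {}

# "Storage Blob Data Contributor" on the parent resource group lets the
# SP/user running Terraform create Gen2 filesystems via the data plane.
resource "azurerm_role_assignment" "blob_contributor" {
  scope                = azurerm_resource_group.rg.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}
```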