terraform-provider-azurerm: azurerm_storage_data_lake_gen2_filesystem: datalakestore.Client#GetProperties: Failure responding to request: StatusCode=403

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

$ terraform -v
Terraform v0.12.24
+ provider.azurerm v2.7.0
+ provider.random v2.2.1

Affected Resource(s)

  • azurerm_storage_data_lake_gen2_filesystem

Terraform Configuration Files

provider "azurerm" {
  version = "~> 2.7.0"
  features {}
}

provider "random" {
  version = "~> 2.2.0"
}

locals {
  resource_group_name = "rg-dev-test"
  storage_account_name = "devtest"
  location = "australiaeast"
}

resource "random_string" "unique_id" {
  length = 24 - length(local.storage_account_name)
  special = false
  upper = false
}

resource "azurerm_resource_group" "rg" {
  name = local.resource_group_name
  location = local.location
}

resource "azurerm_storage_account" "new_storage_account" {
  name = "${local.storage_account_name}${random_string.unique_id.result}"
  resource_group_name = azurerm_resource_group.rg.name
  location = local.location
  account_tier = "Standard"
  account_replication_type = "LRS"
  account_kind = "StorageV2"
  is_hns_enabled = "true"
  network_rules {
    default_action = "Allow"
  }
}

resource "azurerm_storage_data_lake_gen2_filesystem" "new_data_container" {
  name = "test-one"
  storage_account_id = azurerm_storage_account.new_storage_account.id
}

Debug Output

https://gist.github.com/shadowmint/3bc424a8fb2bba0415bd4ee67dfd8572

Panic Output

N/A

Expected Behavior

The plan should have applied cleanly, creating the storage account and the Data Lake Gen2 filesystem.

Actual Behavior

Error: Error checking for existence of existing File System "test-one" (Account "devtestp672h8fwgdvcjsv8i"): datalakestore.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: "" error: EOF

  on main.tf line 41, in resource "azurerm_storage_data_lake_gen2_filesystem" "new_data_container":
  41: resource "azurerm_storage_data_lake_gen2_filesystem" "new_data_container" {

Steps to Reproduce

  1. terraform apply

Important Factoids

  • Note the storage account has ‘default_action’ = ‘Allow’, i.e. it is public. This is not a firewall issue.
  • This issue manifests in multiple regions, so it does not appear to be a ‘region-specific certificate rotation’ issue as in similar issues.
  • The user is not an SP with missing AD credentials; it has the role ‘Service Administrator | Has full access to all resources in the subscription’.

References

This seems to be similar to an issue with the azurerm_storage_account resource that caused the same sort of error and was fixed in 2.1.0: https://github.com/terraform-providers/terraform-provider-azurerm/pull/6050

It seems plausible from the diff that the fix applied there was not applied to azurerm_storage_data_lake_gen2_filesystem, as the added tests only cover the blob container type.

About this issue

  • State: open
  • Created 4 years ago
  • Reactions: 77
  • Comments: 42 (4 by maintainers)

Most upvoted comments

@LaurentLesle are you sure you don’t simply have the storage role assigned to your account, like njuCZ does?

This still doesn’t work, as I previously described, because it’s not using the access token to create the storage container.


So… just to be absolutely clear: no, this is not resolved in the 2.25.0 provider.

Use terraform they said, Azure RM is a first class citizen! First thing I try…great.

Is there a fix for this? Enabling public access to be able to create a container isn’t really ideal.

Are there any updates to this issue? I still have the problem, running the following:

Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/azurerm v2.42.0

Can confirm that this is still an issue with 2.60.0.

As a workaround I have given myself the “Storage Blob Data Contributor” role at subscription level. After a while, and after re-logging in with az logout and az login, it worked. Of course this is not the perfect solution.
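A minimal sketch of that subscription-level assignment in terraform, assuming the deploying identity is the one that needs the role (the resource names here are illustrative, not from the original comment):

data "azurerm_subscription" "primary" {}
data "azurerm_client_config" "current" {}

# Grant the current identity data-plane access to blobs across the
# whole subscription. Propagation can take a while, as noted above.
resource "azurerm_role_assignment" "blob_contributor" {
  scope                = data.azurerm_subscription.primary.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}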

I am also still experiencing this behaviour

Terraform v0.14.5
+ provider registry.terraform.io/hashicorp/azurerm v2.44.0

The workaround on this StackOverflow post has helped me; it assigns the Storage Blob Data Owner role, although I had to add the dependency with depends_on to all other resources referencing the azurerm_storage_account resource.
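A sketch of that pattern against the repro configuration above, assuming the Storage Blob Data Owner role is scoped to the storage account (resource names are illustrative):

data "azurerm_client_config" "current" {}

resource "azurerm_role_assignment" "blob_owner" {
  scope                = azurerm_storage_account.new_storage_account.id
  role_definition_name = "Storage Blob Data Owner"
  principal_id         = data.azurerm_client_config.current.object_id
}

# The filesystem (and anything else touching the data plane) waits
# for the role assignment to exist before it is created.
resource "azurerm_storage_data_lake_gen2_filesystem" "new_data_container" {
  name               = "test-one"
  storage_account_id = azurerm_storage_account.new_storage_account.id
  depends_on         = [azurerm_role_assignment.blob_owner]
}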

For anyone coming from search engines: this error “datalakestore.Client#GetProperties” with azurerm_storage_data_lake_gen2_filesystem happens when you have the firewall enabled on the Storage Account.


Just add the resolving IP of the machine running terraform to the exclusion list, or the subnet if running inside the Azure environment.
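A minimal sketch of that exclusion against the repro configuration above, using the standalone network-rules resource (this assumes the inline network_rules block is dropped from the storage account in its favor; the IP below is a placeholder for the public IP the storage account sees from your machine or agent):

resource "azurerm_storage_account_network_rules" "terraform_host" {
  resource_group_name  = azurerm_resource_group.rg.name
  storage_account_name = azurerm_storage_account.new_storage_account.name
  default_action       = "Deny"
  bypass               = ["AzureServices"]
  # Placeholder address: substitute the terraform host's public IP.
  ip_rules             = ["203.0.113.50"]
}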

In my case it’s been enabled for all networks, and even then I’m getting the same issue.

In my case, I was able to resolve this issue after adding the Terraform Enterprise subnet to the storage account network rules:

resource "azurerm_storage_account_network_rules" "sa" {
  resource_group_name        = module.resource_group.name
  storage_account_name       = azurerm_storage_account.sa.name
  default_action             = "Deny"
  bypass                     = ["AzureServices"]
  virtual_network_subnet_ids = [module.virtual_network.subnet["tfe_public"].id]
}

@tsukabon, if you read the comment history you’ll see this:

For anyone else who finds this, I recommend you forget about using azurerm_role_assignment, because as per #6934 (https://github.com/terraform-providers/terraform-provider-azurerm/issues/6934) there is an arbitrary and indefinite delay between requesting the role and it actually being active.

I’ll also point out, again, that this is a bug.

The azure cli uses the auth token, not AD, to perform this operation, which is why it works without assigning that role.

Using AD is an option specified at the root level of the provider, and should not be used by default.

If what you have described works for you, that’s great! However, be aware that in general it will fail due to timing issues assigning roles.

cheers~

On Sun, 28 Mar 2021 at 11:06 pm, tsukabon @.***> wrote:

@shadowmint @joe-plumb @mattew

This problem can be solved by assigning a built-in role (Storage Blob Data Contributor). The following is a sample terraform file.

data "azurerm_subscription" "primary" {}

resource "azurerm_role_assignment" "user" {
  scope                = azurerm_storage_account.datalake.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}

resource "azurerm_storage_data_lake_gen2_filesystem" "example" {
  name               = "dl2sample"
  storage_account_id = azurerm_storage_account.datalake.id
  depends_on         = [azurerm_role_assignment.user]
}

By the way, if you want to specify the role_definition_id:

resource "azurerm_role_assignment" "user" {
  scope              = azurerm_storage_account.datalake.id
  role_definition_id = format("%s/providers/Microsoft.Authorization/roleDefinitions/%s", data.azurerm_subscription.primary.id, "ba92f5b4-2d11-453d-a403-e96b0029c9fe")
  principal_id       = data.azurerm_client_config.current.object_id
}

https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles

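One way to paper over the propagation delay called out above is an explicit wait between the role assignment and the filesystem. A sketch using the hashicorp/time provider, reusing the names from the quoted sample; the duration is a guess, since the delay is arbitrary:

resource "azurerm_role_assignment" "user" {
  scope                = azurerm_storage_account.datalake.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}

# The delay before an assignment becomes active is arbitrary (#6934),
# so any fixed wait is a guess rather than a guarantee.
resource "time_sleep" "wait_for_role" {
  depends_on      = [azurerm_role_assignment.user]
  create_duration = "300s"
}

resource "azurerm_storage_data_lake_gen2_filesystem" "example" {
  name               = "dl2sample"
  storage_account_id = azurerm_storage_account.datalake.id
  depends_on         = [time_sleep.wait_for_role]
}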

1st solution: explicitly add the IP of the machine from which you are executing this terraform script to Firewall and Virtual networks.

It can be your local machine, or it can be a DevOps self-hosted agent.

In my case, my self-hosted agent is part of the same virtual network, which is allowed in Firewall and Virtual networks, and it is working perfectly.

2nd solution (for POC purposes): change your storage account settings

  1. In Networking > Firewall and Virtual networks, allow access from all networks
  2. In Configuration, “Allow blob public access” should be enabled

After this, you can check access for the service principal.

@rlevchenko you are correct I have updated my comment


@vikascnr it’s not a solution. “Disable the firewall”… it’s a kind of privilege for some.

The Azure team should fix this: “IP network rules have no effect on requests originating from the same Azure region as the storage account.” (’Cause you get the same behavior with pipelines in Azure DevOps, for instance. I don’t have any issues when the firewall is disabled, though.)

Thanks for sharing @shadowmint ! FYI I still see the same issue in azurerm 2.18

You can work around this by explicitly assigning the role Storage Blob Data Contributor to the SP or user on the parent resource group, using azurerm_role_assignment.
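A sketch of that resource-group-scoped assignment, reusing the names from the repro configuration (data.azurerm_client_config.current is assumed to resolve to the deploying identity):

data "azurerm_client_config" "current" {}

# Scoping to the parent resource group covers every storage account in it.
resource "azurerm_role_assignment" "rg_blob_contributor" {
  scope                = azurerm_resource_group.rg.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}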

However, it’s not clear if this is something wrong in the docs or if this is actually a bug. It seems like a bug, because the storage account access token should have superuser permission to add containers even when this role is not assigned, and it can be done via the portal / powershell.