terraform-provider-azurerm: Cannot create azurerm_storage_container in azurerm_storage_account that uses network_rules

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.11.11

  • provider.azurerm v1.21.0

Affected Resource(s)

  • azurerm_storage_account
  • azurerm_storage_container

Terraform Configuration Files

resource "azurerm_storage_account" "test-storage-acct" {
  name                     = "${var.prefix}storacct"
  resource_group_name      = "${var.resgroup}"
  location                 = "${var.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
  network_rules {
    ip_rules                   = ["aaa.bbb.ccc.ddd/ee"]
    virtual_network_subnet_ids = ["${var.subnetid}"]
  }
}
resource "azurerm_storage_container" "provisioning" {
  name                  = "${var.prefix}-provisioning"
  resource_group_name   = "${var.resgroup}"
  storage_account_name  = "${azurerm_storage_account.test-storage-acct.name}"
  container_access_type = "private"
}

Debug Output

  • azurerm_storage_container.provisioning: Error creating container “philtesting1-provisioning” in storage account “philtesting1storacct”: storage: service returned error: StatusCode=403, ErrorCode=AuthorizationFailure, ErrorMessage=This request is not authorized to perform this operation. RequestId:a7f9d2e1-701e-00b3-4e74-cf3b34000000 Time:2019-02-28T14:45:53.7885750Z, RequestInitiated=Thu, 28 Feb 2019 14:45:53 GMT, RequestId=a7f9d2e1-701e-00b3-4e74-cf3b34000000, API Version=, QueryParameterName=, QueryParameterValue=

Expected Behavior

Container can be created in a storage account that uses network rules

Actual Behavior

After applying network_rules to a storage account, I cannot provision a container into it. My public IP is included in the address range specified in the network rule, and I can successfully create the container via the Azure portal.

Steps to Reproduce

  1. terraform apply

About this issue

  • Original URL
  • State: open
  • Created 5 years ago
  • Reactions: 338
  • Comments: 94 (23 by maintainers)

Most upvoted comments

We just ran into this ourselves. Nice to see someone else has already raised the issue with excellent documentation.

The workaround we are testing is to call out to an ARM template for creating the containers. This is not ideal for several reasons:

  1. It’s not Terraform-native
  2. It’s more moving parts and more complicated to manage
  3. ARM templates only apply once, so if the configuration drifts over time Terraform will not set it back

But it’s what we’ve got, and it could serve as a workaround for you if you need one.

I’m using two parts: a JSON file containing the ARM template, and a Terraform azurerm_template_deployment resource.

storage-containers.json

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountName": {
            "type": "string"
        },
        "location": {
            "type": "string"
        }
    },
    "resources": [
        {
            "name": "[parameters('storageAccountName')]",
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2018-07-01",
            "location": "[parameters('location')]",
            "resources": [
                {
                    "name": "default/images",
                    "type": "blobServices/containers",
                    "apiVersion": "2018-07-01",
                    "dependsOn": [
                        "[parameters('storageAccountName')]"
                    ]
                },
                {
                    "name": "default/backups",
                    "type": "blobServices/containers",
                    "apiVersion": "2018-07-01",
                    "dependsOn": [
                        "[parameters('storageAccountName')]"
                    ]
                }
            ]
        }
    ]
}

main.tf

resource "azurerm_storage_account" "standard-storage" {
  name                = "stdstorage"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  account_tier              = "Standard"
  account_replication_type  = "${var.standard_replication_type}"
  enable_blob_encryption    = "${var.standard_enable_blob_encryption}"
  enable_https_traffic_only = true

  network_rules {
    ip_rules                   = "${var.firewall_allow_ips}"
    virtual_network_subnet_ids = ["${var.vm_subnet_id}"]
  }
}

resource "azurerm_template_deployment" "stdstorage-containers" {
  name                = "stdstorage-containers"
  resource_group_name = "${var.resource_group_name}"
  deployment_mode     = "Incremental"

  depends_on = [
    "azurerm_storage_account.standard-storage",
  ]

  parameters {
    location           = "${var.location}"
    storageAccountName = "${azurerm_storage_account.standard-storage.name}"
  }

  template_body = "${file("${path.module}/storage-containers.json")}"
}

Just reading the issue from top to bottom about storage account configuration, and I’m not sure Terraform is the right place for this challenge (I would not call it an issue), as Terraform has no way to change how Azure services work.

It all comes down to how an Azure Storage Account and its services (blobs, files) behave when a private endpoint or service endpoint is enabled on the Storage Account. I agree the MS Docs topics about Storage Account services and configuration are not easy to understand, but I think this knowledge is a must when enabling a private endpoint / service endpoint.

Private Endpoint / Service Endpoint Limitations

So basically there are some options available:

I just want to share my experience; this is how we are doing it, and we are pretty happy with these options so far.

Happy coding!

Since this is only a problem for the container/filesystem resources, I am using an ARM template as a replacement for those. The code is quite simple:

resource "random_id" "randomId" {
  byte_length = 6
}

resource "azurerm_template_deployment" "container" {
  count               = var.account.file_systems
  depends_on          = [ azurerm_storage_account.account ]
  name                = "${azurerm_storage_account.account.name}-container-${random_id.randomId.hex}"
  resource_group_name = var.resource_group_name
  deployment_mode     = "Incremental"
  template_body       = file("${path.module}/container.json")
  parameters          = {
    storage_account_name = azurerm_storage_account.account.name
    container_name       = var.account.file_systems[count.index].container_name
  }
}

With a container.json file in the same folder:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storage_account_name": {
            "type": "string"
        },
        "container_name": {
            "type": "string"
        }
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
            "name": "[concat(parameters('storage_account_name'), '/default/', parameters('container_name'))]",
            "apiVersion": "2021-02-01",
            "properties": {}
        }
    ]
}

Since it’s, by far, the most upvoted issue and a very old one, I think you should close this as “will not be fixed” AND add a warning note on the azurerm_storage_share resource about this “known upstream issue”, with a link to this bug.

Hi, I’ve had a read through https://github.com/terraform-providers/terraform-provider-azurerm/pull/9314 and noted there was a dependency on an upstream Storage API change before this behaviour can be improved in the azurerm terraform provider. Is there an update on how far those changes have progressed and when we can expect the terraform provider to be able to make use of them?

Hello, it seems related to this azure-cli issue: https://github.com/Azure/azure-cli/issues/10190

Currently, the creation of a storage container resource (blob, share) seems to use the storage container API, which is behind the firewall. Instead, it should use the Resource Manager provider. In the issue mentioned above, I just discovered that az cli has an az storage share-rm create command in addition to the existing az storage share create. I don’t know if there is an equivalent for blob, or whether this exists in the Azure REST API or in Terraform 😃

Is this still reproducible?

Yes, also just ran into this today.

The whole point of having an API to spin up resources in the cloud is to be able to do this from anywhere, even when the resources themselves are restricted. I am bewildered by the fact that the Azure API for interacting with storage shares appears to be subject to the network restrictions of the storage account.

resource "azurerm_storage_account" "example" {
  resource_group_name       = azurerm_resource_group.example.name
  location                  = azurerm_resource_group.example.location
  name                      = "example"
  account_kind              = "FileStorage"
  account_tier              = "Premium"
  account_replication_type  = "LRS"
  enable_https_traffic_only = false
}

resource "azurerm_storage_account_network_rules" "example" {
  depends_on = [azurerm_storage_share.example]

  storage_account_id         = azurerm_storage_account.example.id
  default_action             = "Deny"
  virtual_network_subnet_ids = [azurerm_subnet.example.id]
  
  # AZURE LIMITATION:
  #   interactions with storage shares inside a storage account through the Azure API are subject to these restrictions?
  #   ...so all future executions of Terraform break if one doesn't poke oneself a hole for wherever we are running Terraform from
  // ip_rules = [chomp(data.http.myip.body)]
}
// data "http" "myip" {
//   url = "http://icanhazip.com"
// }

resource "azurerm_storage_share" "example" {
  name                 = "example-storage-share"
  storage_account_name = azurerm_storage_account.example.name
  enabled_protocol     = "NFS"
}

Otherwise:

│ Error: shares.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailure" Message="This request is not authorized to perform this operation.\nRequestId:XXXXXXX-YYYY-YYYY-YYYY-ZZZZZZZZZZZZ\nTime:2021-11-10T18:07:08.8135873Z"
│ 
│   with azurerm_storage_share.example,
│   on storage.tf line 47, in resource "azurerm_storage_share" "example":
│   47: resource "azurerm_storage_share" "example" {

Providing another workaround based on the azapi provider:

resource "azapi_resource" "test" {
  name      = "acctestmgd"
  parent_id = "${azurerm_storage_account.test.id}/blobServices/default"
  type      = "Microsoft.Storage/storageAccounts/blobServices/containers@2021-04-01"
  body      = "{}"
}

Resolution of PR #14220 will fix this.

I can confirm that it very much looks like azapi solves a similar problem we have, i.e. how to have Terraform add containers to storage accounts that do not have public internet ingress for the data plane.

Is it possible for this provider to rework the resource to use the resource management API for this operation? The RM API is internet-accessible, which means we don’t have to do anything with network rules regarding where terraform apply is executing.

I ran into this a while back and would manually create the containers. I finally figured out the simplified AzAPI code and am posting it here. This will create a storage account that disables public access and enables NFSv3, then create a container in that account.

resource "azurerm_storage_account" "group_blob_storage" {
  name                      = "exampleblobstorage"
  resource_group_name       = local.app_rg_name
  location                  = local.location
  account_kind              = "StorageV2"
  account_tier              = "Standard"
  access_tier               = "Hot"
  account_replication_type  = "LRS"
  enable_https_traffic_only = true
  is_hns_enabled            = true
  nfsv3_enabled             = true
  min_tls_version           = "TLS1_2"
  allow_blob_public_access  = false
  tags                      = local.default_tags
  lifecycle {
    ignore_changes = [
      tags["CreationDate"],
    ]
  }
  network_rules {
    default_action = "Deny"
  }
}

resource "azapi_resource" "group_blob_containers" {
  type      = "Microsoft.Storage/storageAccounts/blobServices/containers@2022-09-01"
  name      = "mycontainer"
  parent_id = "${azurerm_storage_account.group_blob_storage.id}/blobServices/default"
  body = jsonencode({
    properties = {
      defaultEncryptionScope      = "$account-encryption-key"
      denyEncryptionScopeOverride = false
      enableNfsV3AllSquash        = false
      enableNfsV3RootSquash       = false
      metadata                    = {}
      publicAccess                = "None"
    }
  })
  depends_on = [
    azurerm_storage_account.group_blob_storage
  ]
}

You can change the json-encoded properties as needed.

If someone needs to work around this issue for a storage account of type “FileStorage”, e.g. for an NFS share, this example code worked for us (based on the previous replies using deployment templates):

resource "azurerm_storage_account" "example-nfs" {
  name                      = "examplenfs"
  resource_group_name       = azurerm_resource_group.example.name
  location                  = azurerm_resource_group.example.location
  account_tier              = "Premium"
  account_kind              = "FileStorage"
  account_replication_type  = "LRS"
  enable_https_traffic_only = false

  network_rules {
    default_action             = "Deny"
    # ip_rules                   = ["127.0.0.1/24"]
    virtual_network_subnet_ids = [azurerm_subnet.example_subnet_1.id]
    bypass                     = ["AzureServices"]
  }
}

# NOTE Normally, we will do the following azurerm_storage_share.
#  Due to https://github.com/hashicorp/terraform-provider-azurerm/issues/2977
#  this isn't possible right now, so we are working around it with an ARM template; see
#  post https://github.com/hashicorp/terraform-provider-azurerm/issues/2977#issuecomment-875693407
# resource "azurerm_storage_share" "example-nfs_fileshare" {
#   name                 = "example"
#   storage_account_name = azurerm_storage_account.example-nfs.name
#   quota                = 100
#   enabled_protocol     = "NFS"
# }
resource "azurerm_resource_group_template_deployment" "example-nfs_fileshare" {
  name                = "${azurerm_storage_account.example-nfs.name}-fileshare-example"
  resource_group_name = azurerm_resource_group.example.name
  deployment_mode     = "Incremental"

  parameters_content = jsonencode({
    "storage_account_name" = {
      value = azurerm_storage_account.example-nfs.name
    }
    "fileshare_name" = {
      value = "example"
    }
  })
  template_content = <<TEMPLATE
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storage_account_name": {
      "type": "string"
    },
    "fileshare_name": {
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
      "name": "[concat(parameters('storage_account_name'), '/default/', parameters('fileshare_name'))]",
      "apiVersion": "2021-02-01",
      "properties": {
        "shareQuota": 100,
        "enabledProtocols": "NFS"
      }
    }
  ]
}
TEMPLATE

  depends_on = [azurerm_storage_account.example-nfs]
}

I went through our terraform storage code and refactored it to leverage private endpoints, and in the process I removed the vnet from the network rules in order to confirm it is really using the private endpoint.

It works beautifully but there are some caveats.

  • In order to use private endpoints, your subnet must not be enforcing private endpoint network policies. This is a simple true/false argument named enforce_private_link_endpoint_network_policies in the azurerm_subnet resource. Despite the name of the argument, it must be set to true in order to allow private endpoints to be created (see the sketch after this list).
    • NOTE: There is a separate argument called enforce_private_link_service_network_policies which you do not need to change for this. Ensure you set the one with “endpoint” in the argument name if you are trying to create private endpoints for storage, event hubs, etc.
  • Additionally, in order to create a private endpoint, your storage account must already exist in order to provide the azurerm_private_endpoint resource with the resource ID of your azurerm_storage_account. This means you CANNOT define your network_rules block inside the azurerm_storage_account resource, but instead must create the storage account without network rules, then create a Private DNS Zone, followed by 1-2 private endpoints, followed by applying network rules via the azurerm_storage_account_network_rules resource, and finally creating your azurerm_storage_container.
    • Two private endpoints are recommended in order to provide better performance: one connects to the primary subresource (the storage container connection in this case, which for me is “blob”) and one to the secondary subresource (the “blob_secondary” storage container connection for me).
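A minimal sketch of that subnet setting, assuming an azurerm 2.x/3.x provider where the argument carries this name (resource names and address ranges here are hypothetical):

resource "azurerm_subnet" "endpoints" {
  name                 = "snet-private-endpoints"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]

  # Despite the name, "true" disables policy enforcement and thereby allows
  # private endpoints to be created in this subnet.
  enforce_private_link_endpoint_network_policies = true
}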

I found some sample code here https://www.debugaftercoffee.com/blog/using-terraform-with-private-link-enabled-services and adapted it to my needs. Additional samples are found here: https://github.com/hashicorp/terraform-provider-azurerm/tree/main/examples/private-endpoint and I found that the example in the private-dns-group subdirectory of the second link was most helpful in getting the DNS zone group configured properly for my storage resources.

I hope this helps. Let me know if anyone has questions.

Hi, is there any plan to fix this?

When using Azure DevOps hosted agents to deploy, I ended up writing this piece of PowerShell that invokes the Azure CLI to allow that specific agent’s public IP address into the Storage Account that had IP restrictions enabled, similar to @jeffadavidson.

It’s a script you can call as part of your deployments that will toggle the public IP of that agent either on or off (-mode switch).

As mentioned I use it for Azure DevOps pipeline deployments, but it could be used anywhere else by other deployment tools…

<#
.SYNOPSIS
Set (by mode: ON OFF) the Storage Account Firewall Rules by Public IP address. Used by Azure DevOps Build/Release agents
See here : https://github.com/terraform-providers/terraform-provider-azurerm/issues/2977
.DESCRIPTION
Using Azure CLI
.EXAMPLE
.\SetMode_PublicIPAddress_SA.ps1 -storageaccount sa12345random -resourcegroup RG-NDM-TEST -mode on
.NOTES
Written by Neil McAlister - March 2020
#>
param (
    [Parameter(Mandatory=$true)]
    [string]$storageaccount,
    [Parameter(Mandatory=$true)]
    [string]$resourcegroup,
    [Parameter(Mandatory=$true)]
    [string]$mode
)
#
$ip = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip
write-host $ip
#
if ($mode -eq 'on') { 
az storage account network-rule add --resource-group $resourcegroup --account-name $storageaccount --ip-address $ip
} 
#
if ($mode -eq 'off') {
az storage account network-rule remove --resource-group $resourcegroup --account-name $storageaccount --ip-address $ip
}

I have this as a step in my deployments with -mode on, which allows access to the SA.

I also have another step at the end with -mode off. Note that you should run the -mode off step even if your deployment fails/crashes out; otherwise your SA firewall rules are going to get messy, with lots of orphaned IP addresses in them.

If you are using YAML-based pipelines, that setting is…

condition: always()

…if you’re using GUI-based releases, it is a setting under the ADVANCED options.

There is another reason for using AZAPI that is related to this issue. It surfaces when you disable storage account Shared Key authentication, as per “Well-Architected Framework” guidance.

In order to make this work with Terraform, you need to add storage_use_azuread = true to your provider block, i.e. something like this:

provider "azurerm" {
  features {}
  storage_use_azuread        = true
}

This changes the behaviour of Terraform so that instead of fetching the shared keys and using those, it uses EntraID/AzureAD permissions of the principal running Terraform.

Then, if you try to create a container using azurerm_storage_container it will fail, but if you use the AZAPI provider it works, for reasons similar to the firewall problem noted in this issue.

Obviously, if you want to do any data-plane operations you would need to set IAM appropriately; however, if you just want to create containers and set permissions on them, AZAPI works fine.

The GitHub action in this repository illustrates the behaviour of AZAPI vs AzureRM for this use case.

It’s worth noting there are some limitations when disabling shared key authentication, e.g. when using the table and files APIs as per the AzureRM provider documentation; however, this approach works well for things like Terraform state (or any blob workload), and is useful where alignment to WAF principles is a requirement.

I ran into this problem today as well, and did not want to go the template route. As a result, I wrote the following using the azapi provider:

# main.tf
resource "random_uuid" "acl" {}

# Can't use the terraform azurerm provider because that uses the storage API directly,
# and since the storage account may (most probably) have public access disabled, that API call fails.
# Documentation for this is available at
# https://learn.microsoft.com/en-us/azure/templates/microsoft.storage/2022-09-01/storageaccounts/fileservices/shares?pivots=deployment-language-terraform
resource "azapi_resource" "this" {
  type      = "Microsoft.Storage/storageAccounts/fileServices/shares@2022-09-01"
  name      = var.name
  parent_id = "${var.storage_account_id}/fileServices/default"
  body = jsonencode({
    properties = {
      accessTier       = var.access_tier
      enabledProtocols = var.enabled_protocol
      shareQuota       = var.claims_storage_quota_gb
      signedIdentifiers = [
        {
          accessPolicy = {
            permission = var.permissions
          }
          id = random_uuid.acl.result
        }
      ]
    }
  })
  response_export_values = ["*"]
}

# vars.tf
variable "name" { type = string }
variable "storage_account_id" { type = string }
variable "access_tier" { type = string }
variable "enabled_protocol" { type = string }
variable "claims_storage_quota_gb" { type = number }
variable "permissions" { type = string }

This works just fine with the service principal.

I’d like to add a note from the field, as I have encountered this kind of issue a lot myself in the last week. This is a wider issue that I would describe as follows:

  • Any azurerm resource that relies on the resource’s own endpoints, i.e. the “data plane”, can (sorta obviously) be broken by the inability of the terraform “agent” to access the data plane (be it your TFC cloud agent, a DevOps cloud agent, or a self-hosted agent with a misconfigured private network / privatelink DNS)
  • Resources like azurerm_key_vault_secret, azurerm_storage_container, azurerm_storage_data_lake_gen2_filesystem, and azurerm_synapse_role_assignment rely on the resource API endpoint
  • Reasons for lack of access to the resource API include:
    • privatelink blocked by a private network firewall
    • blocked by the resource’s network_acls or firewall rules
    • misconfigured on-premises privatelink zone DNS forwarding
    • a missing privatelink private DNS zone A record
    • maybe a missing Key Vault access policy(?)
  • Terraform state refresh can be blocked, i.e. before apply, for existing resources where there is no longer data-plane access, for the reasons above
  • Terraform apply can be broken during the initial apply, i.e. during resource creation, for the same reasons
  • In some scenarios, where the network configuration combines with a terraform plan that does not understand the dependency on a privatelink DNS record for data-plane access, a depends_on clause may be helpful (see the sketch after this list). Sometimes it is not so easy due to a dependency loop.
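A minimal sketch of that depends_on idea, with hypothetical resource names (here a Key Vault secret waiting on the vault’s private endpoint):

# Sketch only: force the data-plane resource to wait for the private endpoint
# (and therefore its privatelink DNS record) so the provider's data-plane call
# can resolve and reach the vault from the agent's network.
resource "azurerm_key_vault_secret" "example" {
  name         = "example-secret"
  value        = "example-value"
  key_vault_id = azurerm_key_vault.example.id

  depends_on = [azurerm_private_endpoint.key_vault]
}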

Is there anywhere the azurerm provider documents a resource’s dependency on the data-plane API instead of the ARM management-plane API? Sometimes it’s not obvious (e.g. azurerm_storage_data_lake_gen2_filesystem).

@494206 I didn’t mean to criticise your comment, just to ensure things are clear 😃

There is a workaround: use the azapi provider to manage the storage container purely via its management-plane API, which won’t be impacted by any DNS resolution/firewall issues: https://github.com/hashicorp/terraform-provider-azurerm/issues/2977#issuecomment-1127384422

If I understand this and the other related threads, successfully managing private endpoints on storage accounts with Terraform is currently only possible:

  • in certain specific use cases
  • or with “hacky” (insecure) workarounds
  • or if a private endpoint and private DNS zone are created for every storage account endpoint type

Seconding what @bacatta said – there really should be a warning on the documentation page for azurerm_storage_account about this.

I’d go a little further and have that note state that private endpoints are currently not ‘officially’ supported for storage accounts by the AzureRM provider. It would be a stretch to argue otherwise at the moment, IMHO.

@tspearconquest Haha, great minds! I edited out my comment where I was talking about going down the route of using the local-exec provisioner to run Azure CLI commands to build this stuff out. The issue here is that it looks like you need to run az login before you can actually run the commands.

I may just write an Azure CLI script for our customer to run that creates the storage account/container we’ll use for remote state in the bigger terraform deployment we’ll be rolling into their environments. The whole reason for this was to use terraform to build a storage account/container to be used in a subsequent terraform deployment for the remote state. I wanted to build out the entire deployment with terraform, but in this state, it is not possible.

I appreciate all of your effort on this subject. I hope that this can be resolved in the provider soon and that Microsoft wakes up to this being a real issue.

Hi @TheKangaroo, yes, they can all be defined in a single .tf file and created in a single run, but the network rules must be defined as a separate resource from the storage account, meaning you can’t include the network_rules block in the storage account resource.

While adding the vnet to the network rules is a solution, this routes your traffic over the public internet, which is not as ideal as having completely private storage.

An alternative solution that I am investigating using is private endpoints.

When you create a private endpoint for a storage container, a private DNS zone for the storage is created and associated with the vnet where you put the endpoint. This allows the resources in that vnet to resolve the Azure storage hostname to a private IP, so connections from that vnet traverse the Microsoft backbone properly instead of going over the public internet.

This bypasses any network rules you put on the storage account, because network rules only apply to the public endpoint, so you would create private endpoints for a vnet that needs access, and then you can either peer other vnets to that one, or create new endpoints for the other vnets to prevent sharing resources between the vnets.

In Terraform, this does require that you initially create an azurerm_storage_account resource without a network_rules block, then create an azurerm_private_dns_zone resource, an azurerm_private_dns_zone_virtual_network_link resource, an azurerm_private_endpoint resource, and then apply the network rules using the azurerm_storage_account_network_rules resource.
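A condensed sketch of that ordering, with hypothetical names (the vnet, subnet, and resource group are assumed to exist elsewhere; argument names follow the azurerm 2.x/3.x schemas):

# 1. Storage account without a network_rules block.
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# 2. Private DNS zone for blob, linked to the vnet.
resource "azurerm_private_dns_zone" "blob" {
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "blob" {
  name                  = "blob-dns-link"
  resource_group_name   = azurerm_resource_group.example.name
  private_dns_zone_name = azurerm_private_dns_zone.blob.name
  virtual_network_id    = azurerm_virtual_network.example.id
}

# 3. Private endpoint for the blob subresource.
resource "azurerm_private_endpoint" "blob" {
  name                = "example-blob-pe"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  subnet_id           = azurerm_subnet.endpoints.id

  private_service_connection {
    name                           = "example-blob-psc"
    private_connection_resource_id = azurerm_storage_account.example.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.blob.id]
  }
}

# 4. Network rules applied as a separate resource, after the endpoint exists.
resource "azurerm_storage_account_network_rules" "example" {
  storage_account_id = azurerm_storage_account.example.id
  default_action     = "Deny"
  bypass             = ["AzureServices"]

  depends_on = [azurerm_private_endpoint.blob]
}

# 5. Finally, the container, once the private endpoint is resolvable.
resource "azurerm_storage_container" "example" {
  name                  = "example"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"

  depends_on = [azurerm_private_endpoint.blob]
}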

We are facing the same issue and are really surprised that there is still no good solution for this kind of problem.

We have the Azure Policy “Storage Accounts should disable public network access” enabled with “deny”, so at the moment it’s not even possible to allow public/restricted network access to the storage account for deploying the containers.

This issue was fixed for me with the latest AzureRM provider version; I used version 3.70.0.

Was it? I’m trying to create a container after my storage account has been created with a private link, and it won’t allow me. I’m on 3.73.0, too.

If you are running from a pipeline, make sure you have added the pipeline IP to the network restrictions. Also add 'Storage Blob Data Contributor' to your current object ID (data.azurerm_client_config.current.object_id), e.g. via an azurerm_role_assignment as sketched below.
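A minimal sketch of that role assignment, with hypothetical resource names:

# Sketch only: grant the identity running Terraform data-plane access to blobs
# in the account, so container/blob operations are authorized via Azure AD.
data "azurerm_client_config" "current" {}

resource "azurerm_role_assignment" "tf_blob_contributor" {
  scope                = azurerm_storage_account.example.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}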

I understand that and totally appreciate that the option would work. It’s probably that I’m coming from a different angle, in that if you deploy via Bicep you don’t need to open up any IPs, as this is all done via the API.

Obviously Bicep and Terraform are two different products with different ways of working, so I’ll just have to adjust accordingly.

When they are in the same region, the traffic is NOT routed via the internet; when they are in different regions, it is. The Storage Account network rules apply only to traffic routed via the internet, so for same-region requests the fix is to use a private endpoint on the storage account. See this comment for more details.

You can also refer here for more about connecting devops to a storage account private endpoint

@tombuildsstuff @magodo This issue has been here for 3.5 years and I cannot see the end of it. It is blocking a huge amount of work for storage accounts that are forced to use a firewall.

Is it possible to make any “temporary” solution until MSFT implements something on their side? Bringing something into the terraform provider takes one week (until the next release is out), while waiting for MSFT already takes much longer 😃

It should build the dependencies correctly based on the resource IDs being included; however, I chose in my code to explicitly define them in order to make certain that the diagnostic settings for my AKS cluster are not created until the storage container is created.

My diagnostic settings don’t explicitly depend upon the storage; rather, we use a tool running in the cluster to extract the audit logs from an event hub, and that tool itself is what requires the storage. So the implicit dependency is not known to Terraform, which is why I chose to define the dependencies explicitly.

Hello tspearconquest,

I tried the approach you recommended, but I am running into issues (the private endpoint is not being utilized). Can you please validate and let me know if I am missing anything:

  1. Created the following resources for the self-hosted build agent in a separate resource group - azurerm_virtual_network, azurerm_subnet, azurerm_private_dns_zone, azurerm_private_dns_zone_virtual_network_link.
  2. Created the datalake and related resources in a different resource group and added the azurerm_private_endpoint resource with the subnet_id, private_dns_zone_group pointing to the resources created for the build agent. Pointed the private_service_connection private_connection_resource_id to the datalake storage account id.

The issue is that on re-running the terraform apply for the datalake, it is unable to access the containers (getting the 403 error), i.e. it does not seem to be using the private endpoint created via the process above.

@andyr8939 comment is the correct fix until #14220 is implemented.

This solution (“add the vnet to the network rules”) worked for us

I had this same issue where I created a Premium SKU File Share with Terraform 1.0.2 on Azure, but when I locked it down to a VNET and public IPs, my build agents got 403 not authorized. If I built locally from my workstation it would work, but even with the public IP of my self-hosted Azure DevOps build agents allowed, it still failed.

Then I found this mentioned on the resource documentation - https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account_network_rules#ip_rules

IP network rules have no effect on requests originating from the same Azure region as the storage account. Use Virtual network rules to allow same-region requests. Services deployed in the same region as the storage account use private Azure IP addresses for communication. Thus, you cannot restrict access to specific Azure services based on their public outbound IP address range.

So that means, for me anyway, that because my build agents were in the same Azure region as the file share, they were getting the internal IP, not the public one. To fix it, I added the build VM vnet to the allowed virtual networks on the file share (as sketched below), and now it works fine.
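A hedged sketch of that fix, with hypothetical names (the build agents’ subnet is assumed to have the Microsoft.Storage service endpoint enabled):

# Sketch only: allow the build agents' subnet through the account firewall so
# same-region requests from the agents are accepted.
resource "azurerm_storage_account_network_rules" "example" {
  storage_account_id         = azurerm_storage_account.example.id
  default_action             = "Deny"
  bypass                     = ["AzureServices"]
  virtual_network_subnet_ids = [azurerm_subnet.build_agents.id]
}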

The reason is most likely that listing existing storage containers in a storage account directly accesses the storage account's REST API. It fails if there are firewall rules in place that don't include the IP of the host terraform runs on; it works if this IP is added. However, finding that IP is a challenge when terraform is run from Azure DevOps, as we do. This might not be easy to fix. Maybe storage account firewall rules should be their own resource that is added last in a deployment? Or creating a storage container resource could first disable the firewall on the storage account and re-enable it afterwards?