terraform-provider-azurerm: Cannot create azurerm_storage_container in azurerm_storage_account that uses network_rules
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform (and AzureRM Provider) Version
Terraform v0.11.11
- provider.azurerm v1.21.0
Affected Resource(s)
- azurerm_storage_account
- azurerm_storage_container
Terraform Configuration Files
resource "azurerm_storage_account" "test-storage-acct" {
name = "${var.prefix}storacct"
resource_group_name = "${var.resgroup}"
location = "${var.location}"
account_tier = "Standard"
account_replication_type = "LRS"
network_rules {
ip_rules = ["aaa.bbb.ccc.ddd/ee"]
virtual_network_subnet_ids = ["${var.subnetid}"]
}
}
resource "azurerm_storage_container" "provisioning" {
name = "${var.prefix}-provisioning"
resource_group_name = "${var.resgroup}"
storage_account_name = "${azurerm_storage_account.test-storage-acct.name}"
container_access_type = "private"
}
Debug Output
- azurerm_storage_container.provisioning: Error creating container "philtesting1-provisioning" in storage account "philtesting1storacct": storage: service returned error: StatusCode=403, ErrorCode=AuthorizationFailure, ErrorMessage=This request is not authorized to perform this operation. RequestId:a7f9d2e1-701e-00b3-4e74-cf3b34000000 Time:2019-02-28T14:45:53.7885750Z, RequestInitiated=Thu, 28 Feb 2019 14:45:53 GMT, RequestId=a7f9d2e1-701e-00b3-4e74-cf3b34000000, API Version=, QueryParameterName=, QueryParameterValue=
Expected Behavior
Container can be created in a storage account that uses network rules
Actual Behavior
After applying a network_rule to a storage account, I cannot provision a container into it. My public IP is included in the address range specified in the network rule. I can successfully create the container via the Azure portal.
Steps to Reproduce
terraform apply
About this issue
- Original URL
- State: open
- Created 5 years ago
- Reactions: 338
- Comments: 94 (23 by maintainers)
We just ran into this ourselves. Nice to see someone else has already raised the issue with excellent documentation.
The workaround we are testing is to call out to an ARM template for creating the containers. This is not ideal for several reasons:
But it's what we've got. This could be a workaround for you if you need this.
I'm using two parts: a JSON file with the ARM template (`storage-containers.json`) and a Terraform `azurerm_template_deployment` (`main.tf`).

Just reading the issue from top to bottom about storage account configuration, I'm not sure if Terraform is the right place for this challenge (I would not call it an issue), as Terraform has no ability to change the way Azure services work.
It all comes down to how the Azure Storage Account works, its services like blobs and files, and what happens when a private endpoint / service endpoint is enabled on the Storage Account. I agree the MS Docs topics about Storage Account services and configuration are not easy to understand, but I think this knowledge is a must when enabling a private endpoint / service endpoint.
- Private Endpoint
- Service Endpoint
- Limitations
So basically there are some options available:
- `azapi`, which uses the ARM API (it does not use storage service endpoints and is not impacted by firewall / network rules), mentioned by @magodo
- a private runner / agent with its IP whitelisted (probably the easiest and most reliable solution so far for both GitHub and Azure DevOps): Deploy self-hosted CI/CD runners and agents with Azure Container Apps jobs

I just want to share my experience; this is how we are doing it, and we are pretty happy with those options so far.
Happy coding!
Since this is only a problem with the container/filesystem resources, I am using an ARM template as a replacement for those. Code is quite simple:
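A minimal sketch of this approach, assuming hypothetical resource and storage account names; the ARM template is inlined here rather than read from a separate file:

```hcl
# Sketch only: names are hypothetical. The ARM deployment runs through the
# management plane, so it is not blocked by the storage account firewall.
resource "azurerm_resource_group_template_deployment" "containers" {
  name                = "storage-containers"
  resource_group_name = azurerm_resource_group.example.name
  deployment_mode     = "Incremental"

  template_content = <<-TEMPLATE
  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
      {
        "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
        "apiVersion": "2021-04-01",
        "name": "examplestorageaccount/default/provisioning",
        "properties": { "publicAccess": "None" }
      }
    ]
  }
  TEMPLATE
}
```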
With a `container.json` file in the same folder (the template inlined above would live in that file and be loaded with `file()`).

Since it's, by far, the most voted issue and a very old one, I think you should close this as "will not be fixed" AND add a warning note on the azurerm_storage_share resource about this "known upstream issue", with a link to this bug.
Hi, I've had a read through https://github.com/terraform-providers/terraform-provider-azurerm/pull/9314 and noted there was a dependency on an upstream Storage API change before this behaviour could be improved in the `azurerm` Terraform provider. Is there an update on how far those changes have progressed and when we can expect the Terraform provider to be able to make use of them?

Hello, it seems related to this azure-cli issue: https://github.com/Azure/azure-cli/issues/10190
Currently, the creation of a storage container resource (blob, share) seems to use the storage container API, which sits behind the firewall. Instead, it should use the Resource Manager provider. In the issue mentioned above, I just discovered that az cli has an `az storage share-rm create` in addition to the existing `az storage share create`. I don't know if there is an equivalent for blob, or whether this exists in the Azure REST API or in Terraform 🙂

Yes, also just ran into this today.
The whole point of having an API to spin up resources in the cloud is to be able to do this from anywhere while the resources themselves are restricted. I am bewildered by the fact that the Azure API to interact with storage shares is subject to the network restrictions of the storage account.
Otherwise:
Providing another workaround based on the azapi provider:
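A minimal sketch of that workaround, assuming an existing `azurerm_storage_account.example` and hypothetical names (with `azapi` 1.x, `body` is a JSON string; newer versions also accept an HCL object):

```hcl
terraform {
  required_providers {
    azapi = { source = "Azure/azapi" }
  }
}

# Creates the container through the ARM management plane, bypassing the
# storage account firewall that blocks the data plane API.
resource "azapi_resource" "container" {
  type      = "Microsoft.Storage/storageAccounts/blobServices/containers@2022-09-01"
  name      = "example-container"
  parent_id = "${azurerm_storage_account.example.id}/blobServices/default"

  body = jsonencode({
    properties = { publicAccess = "None" }
  })
}
```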
Resolution of PR #14220 will fix this.
I can confirm that it very much looks like `azapi` solves the similar problem we have, i.e. how to have Terraform add containers to storage accounts that do not have public internet ingress for the data plane.

Is it possible for this provider to rework the resource to use the resource management API for this operation? The RM API is internet-accessible, which means we don't have to do anything with network rules depending on where terraform apply is executing.
I ran into this a while back and would manually create the containers. I finally figured out the simplified AzAPI code and am posting it here. This will create a storage account that disables public access and enables NFSv3, then create a container in that account.
You can change the JSON-encoded properties as needed.
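A sketch along those lines, with hypothetical names; NFSv3 additionally requires a hierarchical namespace and secure transfer disabled, hence the extra properties:

```hcl
resource "azapi_resource" "storage" {
  type      = "Microsoft.Storage/storageAccounts@2022-09-01"
  name      = "examplenfsstorage"
  parent_id = azurerm_resource_group.example.id
  location  = azurerm_resource_group.example.location

  body = jsonencode({
    kind = "StorageV2"
    sku  = { name = "Standard_LRS" }
    properties = {
      publicNetworkAccess      = "Disabled"
      isHnsEnabled             = true  # hierarchical namespace, required for NFSv3
      isNfsV3Enabled           = true
      supportsHttpsTrafficOnly = false # NFSv3 does not work with HTTPS-only transfer
    }
  })
}

resource "azapi_resource" "container" {
  type      = "Microsoft.Storage/storageAccounts/blobServices/containers@2022-09-01"
  name      = "data"
  parent_id = "${azapi_resource.storage.id}/blobServices/default"
  body      = jsonencode({ properties = { publicAccess = "None" } })
}
```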
If someone needs to work around this issue for a storage account of type "FileStorage", e.g. for an NFS share, this example code worked for us (based on the previous replies with deployment templates):
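A comparable sketch for a file share, again with hypothetical names; the `Microsoft.Storage/storageAccounts/fileServices/shares` ARM type accepts `enabledProtocols` for NFS:

```hcl
resource "azurerm_resource_group_template_deployment" "nfs_share" {
  name                = "nfs-share"
  resource_group_name = azurerm_resource_group.example.name
  deployment_mode     = "Incremental"

  template_content = <<-TEMPLATE
  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
      {
        "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
        "apiVersion": "2021-04-01",
        "name": "examplepremiumsa/default/nfsshare",
        "properties": {
          "enabledProtocols": "NFS",
          "shareQuota": 100
        }
      }
    ]
  }
  TEMPLATE
}
```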
I went through our terraform storage code and refactored it to leverage private endpoints, and I removed the vnet from the network rules in the process of doing that in order to confirm it is really using the private endpoint.
It works beautifully but there are some caveats.
- `enforce_private_link_endpoint_network_policies` in the `azurerm_subnet` resource: despite the name of the argument, it must be set to `true` in order to allow private endpoints to be created. There is also `enforce_private_link_service_network_policies`, which you do not need to change for this; make sure you set the one with "endpoint" in the argument name if you are trying to create private endpoints for storage, event hubs, etc.
- You create an `azurerm_private_endpoint` resource with the resource ID of your `azurerm_storage_account`. This means you CANNOT define your `network_rules` block inside the `azurerm_storage_account` resource, but instead must create the storage account without network rules, then create a Private DNS Zone, followed by 1-2 private endpoints, followed by applying network rules via the `azurerm_storage_account_network_rules` resource, and finally creating your `azurerm_storage_container` (see the sketch after this comment).

I found some sample code here https://www.debugaftercoffee.com/blog/using-terraform-with-private-link-enabled-services and adapted it to my needs. Additional samples are found here: https://github.com/hashicorp/terraform-provider-azurerm/tree/main/examples/private-endpoint; I found the example in the `private-dns-group` subdirectory most helpful in getting the DNS zone group configured properly for my storage resources.

I hope this helps. Let me know if anyone has questions.
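A minimal end-to-end sketch of that sequence, assuming a pre-existing resource group, vnet, and a subnet with `enforce_private_link_endpoint_network_policies = true` (names are hypothetical; in azurerm 3.x `azurerm_storage_account_network_rules` takes `storage_account_id`):

```hcl
# 1. Storage account with no network_rules block
resource "azurerm_storage_account" "example" {
  name                     = "examplestorage"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# 2. Private DNS zone for blob, linked to the vnet
resource "azurerm_private_dns_zone" "blob" {
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "blob" {
  name                  = "blob-link"
  resource_group_name   = azurerm_resource_group.example.name
  private_dns_zone_name = azurerm_private_dns_zone.blob.name
  virtual_network_id    = azurerm_virtual_network.example.id
}

# 3. Private endpoint for the storage account's blob sub-resource
resource "azurerm_private_endpoint" "blob" {
  name                = "example-blob-pe"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  subnet_id           = azurerm_subnet.endpoints.id

  private_service_connection {
    name                           = "example-blob-psc"
    private_connection_resource_id = azurerm_storage_account.example.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.blob.id]
  }
}

# 4. Network rules applied as a separate resource, after the endpoint exists
resource "azurerm_storage_account_network_rules" "example" {
  storage_account_id = azurerm_storage_account.example.id
  default_action     = "Deny"
  bypass             = ["AzureServices"]

  depends_on = [azurerm_private_endpoint.blob]
}

# 5. Finally, the container. This still uses the data plane, so Terraform must
#    run from a network that resolves the account to the private endpoint.
resource "azurerm_storage_container" "example" {
  name                  = "data"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"

  depends_on = [azurerm_storage_account_network_rules.example]
}
```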
Hi, is there any plan to fix this?
When using Azure DevOps hosted agents to deploy, I ended up writing a piece of PowerShell that invokes Azure CLI to allow that specific agent's public IP address into the Storage Account that had IP restrictions enabled, like @jeffadavidson.
It's a script you can call as part of your deployments that will toggle the public IP of that agent either on or off (`-mode` switch).
As mentioned, I use it for Azure DevOps pipeline deployments, but it could be used anywhere else by other deployment tools.
I have this as a step in my deployments with `-mode on`, which allows access to the SA.
I also have another step at the end with `-mode off`. Note that you should run the `-mode off` step even if your deployment fails/crashes out, otherwise your SA firewall rules are going to get messy with lots of orphaned IP addresses in them.
If you are using YAML based pipelines, that setting is `condition: always()`; if using GUI based releases, it is a setting under Advanced options.
There is another reason for using AZAPI that is related to this issue. It surfaces when you disable storage account Shared Key authentication, as per "well architected framework" guidance:
In order to make this work with Terraform, you need to add `storage_use_azuread = true` to your provider block, i.e. something like the sketch below. This changes the behaviour of Terraform so that instead of fetching the shared keys and using those, it uses the Entra ID/Azure AD permissions of the principal running Terraform.
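A minimal sketch of such a provider block, with all other settings omitted:

```hcl
provider "azurerm" {
  features {}

  # Authenticate to the storage data plane with Entra ID (Azure AD)
  # instead of fetching and using the account's shared keys.
  storage_use_azuread = true
}
```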
Then, if you try to create a container using azurerm_storage_container it will fail, but if you use the AZAPI provider it works, for reasons similar to the firewall problem noted in this issue.
Obviously, if you want to do any data plane operations, you would need to set IAM appropriately, however if you just want to create containers and set permissions on them, AZAPI works fine.
The GitHub action in this repository illustrates the behaviour of AZAPI vs AzureRM for this use case.
It's worth noting there are some limitations when disabling shared key authentication, e.g. when using the table & files API as per the AzureRM provider documentation; however, this approach works well for things like Terraform state (or any blob workload), and is useful where alignment to WAF principles is a requirement.
I ran into this problem today as well, and did not want to go via the template route. As a result I wrote the following using the `azapi` provider.

This works just fine with the service principal.
I'd like to add a note from the field, as I have encountered this kind of issue a lot myself in the last week. This is a wider issue I would describe as:
Is there anywhere the azurerm provider documents its dependency on the data plane API instead of the ARM management plane API? Sometimes it's not obvious (e.g. azurerm_storage_data_lake_gen2_filesystem).
@494206 I didn't mean to criticise your comment, just to ensure things are clear 🙂
There is a workaround: use the `azapi` provider to manage the storage container purely via its management plane API, which won't be impacted by any DNS resolution/firewall issues: https://github.com/hashicorp/terraform-provider-azurerm/issues/2977#issuecomment-1127384422

If I understand this and the other related threads, successfully managing private endpoints on storage accounts with Terraform is currently only possible:
Seconding what @bacatta said: there really should be a warning on the documentation page for azurerm_storage_account about this.
I'd go a little further and have that note state that private endpoints are currently not "officially" supported for storage accounts by the AzureRM provider. It would be a stretch to argue otherwise at the moment, IMHO.
@tspearconquest Haha, great minds! I edited out my comment where I was talking about going down the route of using the local-exec provisioner to run Azure CLI commands to build this stuff out. The issue here is that it looks like you need to run an az login before you can actually run the commands.
I may just write an Azure CLI script for our customer to run that creates the storage account/container we need to enable remote state for the bigger Terraform deployment we'll be rolling into their environments. The whole reason for this was using Terraform to build a storage account/container to be used in a subsequent Terraform deployment for the remote state. I wanted to build out the entire deployment with Terraform, but in this state, it is not possible.
I appreciate all of your effort on this subject. I hope that the provider issue can be resolved soon and that Microsoft wakes up to this being a real issue.
Hi @TheKangaroo, yes, they can all be defined in a single .tf file and created in a single run, but the network rules must be defined as a separate resource from the storage account, meaning you can't include the network_rules block in the storage account resource.
While adding the vnet to the network rules is a solution, this routes your traffic over the public internet, which is not as ideal as having completely private storage.
An alternative solution that I am investigating using is private endpoints.
When you create a private endpoint for a storage container, a private DNS zone for the storage is created and associated to a vnet where you put the endpoint. This allows the resources in that vnet to resolve the Azure storage IP as a private IP, so connections from that vnet will traverse the Microsoft backbone properly instead of going over the public internet.
This will go around any network rules you put on the storage because network rules only apply to the public IP, so you would create private endpoints for a vnet that needs access, and then you can either peer other vnets to that one, or create new endpoints for the other vnets to prevent sharing resources between the vnets.
In Terraform, this does require that you initially create an `azurerm_storage_account` resource without a `network_rules` block, then create an `azurerm_private_dns_zone` resource, an `azurerm_private_dns_zone_virtual_network_link` resource, and an `azurerm_private_endpoint` resource, and then apply the network rules using the `azurerm_storage_account_network_rules` resource.

We are facing the same issue and are really surprised that there is still no good solution for this kind of issue.
We have the Azure Policy "Storage Accounts should disable public network access" enabled with "deny", so at the moment it is not even possible to allow public/restricted network access to the storage account for deploying the containers.
I understand that and totally appreciate that option would work. It's probably that I'm coming from a different angle: if you deploy via Bicep you don't need to open up any IPs, as this is all done via the API.
Obviously Bicep and Terraform are two different products with different ways of working, so I'll just have to adjust accordingly.
When they are in the same region, the traffic is NOT routed via the internet; when they are in different regions, it is. The Storage Account network rules apply only to traffic routed via the internet, so for same-region requests the fix is to use a private endpoint for the storage account. See this comment for more details.
You can also refer here for more about connecting DevOps to a storage account private endpoint.
@tombuildsstuff @magodo This issue has been open for 3.5 years and I cannot see the end of it. It is blocking a huge amount of work for storage accounts that are forced to use a firewall.
Is it possible to make some "temporary" solution until MSFT implements something on their side? Bringing something to the Terraform provider takes one week (until the next release is out), while waiting for MSFT already takes much longer 🙂
Hello tspearconquest,
I tried the approach you recommended, but am running into issues, i.e. the private endpoint is not being utilized. Can you please validate and let me know if I am missing anything:
The issue is that on re-running terraform apply for the data lake, it is unable to access the containers (getting the 403 error), i.e. it does not seem to be using the private endpoint created via the process above.
@andyr8939's comment is the correct fix until #14220 is implemented.
This solution ("add the vnet to the network rules") worked for us.
I had this same issue where I created a Premium SKU file share with Terraform 1.0.2 on Azure, but when I locked it down to a VNET and public IPs, my build agents got 403 Not Authorized. If I built locally from my workstation it would work, but even with the public IP of my self-hosted build agents for Azure DevOps, it still failed.
Then I found this mentioned on the resource documentation - https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account_network_rules#ip_rules
> IP network rules have no effect on requests originating from the same Azure region as the storage account. Use Virtual network rules to allow same-region requests. Services deployed in the same region as the storage account use private Azure IP addresses for communication. Thus, you cannot restrict access to specific Azure services based on their public outbound IP address range.

So that means, for me anyway, as my build agents were in the same region in Azure as the file share, they were getting the internal IP, not the public one. To fix it, I added the build VM vnet into the allowed virtual networks on the file share and now it works fine.
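A sketch of that fix with hypothetical names; note the subnet needs the `Microsoft.Storage` service endpoint enabled for a virtual network rule to take effect:

```hcl
resource "azurerm_storage_account_network_rules" "example" {
  storage_account_id = azurerm_storage_account.example.id
  default_action     = "Deny"

  # Public IP allow-list: ineffective for same-region Azure traffic.
  ip_rules = ["203.0.113.0/24"]

  # Same-region build agents must come in via a virtual network rule instead;
  # their subnet needs service_endpoints = ["Microsoft.Storage"].
  virtual_network_subnet_ids = [azurerm_subnet.build_agents.id]
}
```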
The reason is most likely that listing existing storage containers in a storage account directly accesses the storage account's REST API. It fails if there are firewall rules in place that do not contain the IP of the host Terraform runs on; it works if this IP is added. However, finding that IP is a challenge when Terraform is run from Azure DevOps, as we do. This might not be easy to fix. Maybe storage account firewall rules should be their own resource that is added last in a deployment? Or creating a storage container resource could first disable the firewall on the storage account and re-enable it afterwards?