terragrunt: Terragrunt fails to get outputs of dependency

I get the following error when running “terragrunt apply” in module “bbb”:

[terragrunt] 2020/09/08 19:46:04 /.../aaa/terragrunt.hcl is a dependency of /.../bbb/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.

There are, however, outputs. My code worked with Terragrunt 0.23.36, but broke with 0.23.37. It doesn’t work with 0.24.0 either. Terraform version 0.13.2 was used during all runs.

The setup is as follows:

# module "aaa":
include {
  path = find_in_parent_folders()
}

# terragrunt.hcl in parent folder
dependency "bbb" {
  config_path = "/.../bbb")
}
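(For comparison, the conventional layout declares the dependency in the consuming module’s own terragrunt.hcl and wires its outputs into inputs; the module names, path, and the "some_id" output below are illustrative, not from the report:)

# terragrunt.hcl in module "bbb"
include {
  path = find_in_parent_folders()
}

dependency "aaa" {
  config_path = "../aaa"
}

inputs = {
  # "some_id" is a placeholder for an actual output of module "aaa"
  aaa_id = dependency.aaa.outputs.some_id
}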

About this issue

  • State: open
  • Created 4 years ago
  • Reactions: 19
  • Comments: 40 (8 by maintainers)

Most upvoted comments

@valdestron when you say

My modules was in plan stage.

Do you mean the modules were in a clean slate with no deployed infrastructure? In that case the dependency fetching is properly giving you an error because there are no outputs (because the dependent module hasn’t been applied) when your configuration expects it. Using mock_outputs to support plan is the proper solution for that.
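For reference, mock_outputs can be scoped so the mocks only apply to commands that tolerate missing state; a sketch (the dependency name, path, and mock value are placeholders):

dependency "vpc" {
  config_path = "../vpc"

  # Placeholder values returned only while the dependency has no real outputs yet
  mock_outputs = {
    vpc_id = "vpc-00000000"
  }

  # Restrict mocks so apply still fails fast if outputs are genuinely missing
  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
}

With this, apply never silently consumes mock values; only plan and validate do.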

The problem continues in the latest 2022 version…

The mock_outputs solution is very poor, since plan tells you one thing and apply does another. In addition, having production code with “mocks” is very “strange”.

I have worked with Terragrunt and Terraform for a long time. Because this issue hasn’t been fixed for years, I am going to move on from Terragrunt. The positive effects of Terragrunt do not outweigh the negative side-effects, for example the flawed dependency resolution. But for larger projects, I have also noted that breaking the state up doesn’t really benefit the build time in terms of caching and speed. And dividing the state in different substates was exactly one of the most valuable things Terragrunt had to offer.

The problem here is that when running a terragrunt run-all plan, newly added resources or newly added attributes (i.e. outputs) are not resolved before the terragrunt.hcl files that actually use those new attributes/modules as a dependency. This is probably a fundamental issue, hence why it hasn’t been resolved yet. On the other hand, a plan on a submodule does output newly added attributes, so why can other modules not wait until their dependencies are fully planned and their outputs are known?

And then comes the usual answer that you can mock outputs. I don’t want to mock outputs every time I add a new module or attribute to a module. I only want to mock when I test something; this is not something I want to put inside my production configuration files. A mistake is easily made, and when a resource is renamed it could be destroyed.

Have the same problem, tried everything but didn’t get through it. For me it was a reason to stop using Terragrunt.

Has there been any progress on this? This is currently blocking a refactor of our infrastructure, as it will not allow an init of modules which are new and have dependencies. As others have stated, using mock_outputs seems to be a very poor solution that introduces risk between init/plan and apply.

After terragrunt run-all refresh I could see the outputs (without mocking).

Hi, not sure what the outcome of the investigation was, but I face the same issue. The example structure I have looks like this:

└── xxx
    ├── aks
    │   └── terragrunt.hcl
    └── aks-addon
        └── terragrunt.hcl

In my “aks-addon” module’s terragrunt.hcl I defined a dependency:

dependency "aks" {
  config_path = "../aks"
}

But when I try to start any terragrunt command inside aks-addon directory I get

[terragrunt] [/home/adamplaczek/xxx/aks] 2020/11/26 08:16:35 Running command: terraform init -get=false -get-plugins=false

[terragrunt] [/home/adamplaczek/xxx/aks] 2020/11/26 08:16:38 Running command: terraform output -json

[terragrunt] 2020/11/26 08:16:40 /home/adamplaczek/xxx/aks/terragrunt.hcl is a dependency of /home/adamplaczek/xxx/aks-addons/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.

But when I enter the dependency directory manually and run

terraform init -get=false -get-plugins=false
terraform output -json

I can see the output.

If I set disable_dependency_optimization = true on the remote_state block, it works. I’m using Azure.
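For anyone else on azurerm, the flag goes on the remote_state block; a sketch (the backend values here are placeholders, not the commenter’s actual configuration):

remote_state {
  backend = "azurerm"

  # Forces a full terraform init in the dependency instead of the
  # optimized output fetch, which is the step that fails here
  disable_dependency_optimization = true

  config = {
    resource_group_name  = "rg-terragrunt-backend-state"
    storage_account_name = "stterragruntstate"
    container_name       = "terragrunt"
    key                  = "${path_relative_to_include()}/terraform.tfstate"
  }
}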

There is also an issue with using mock_outputs. If you mock an output such as a vpc_id, and the source module uses a Terraform data source, e.g.:

data "aws_vpc" "selected" {
  id          = var.vpc_id
}

Then the plan will fail with “no matching VPC found” if you use a mock VPC id string, even though the name “mock_outputs” suggests you should be able to use mock strings.
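One possible workaround (a sketch, assuming you can tolerate the lookup being skipped during plan; the "mock-vpc-id" sentinel is a hypothetical value you would set in mock_outputs) is to gate the data source on the mock value:

# Hypothetical guard: skip the lookup while vpc_id is still the mock sentinel
data "aws_vpc" "selected" {
  count = var.vpc_id == "mock-vpc-id" ? 0 : 1
  id    = var.vpc_id
}

The trade-off is that downstream references must then use the indexed form, data.aws_vpc.selected[0], and the planned values stay unknown until the dependency is applied.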

We experimented with examining the plan output and generating locals stubs in the Terraform code, using a special provider to mark the corresponding variables of the outputs in the dependent module as not yet known at plan time.

This way you can still get a glance at the impact and keep your changes plannable to some extent, though of course with limitations. It has probably been mentioned a couple of times before, but things are complex. What if, for example, you are using the kubernetes provider in the dependent module and configure it with a data block using the cluster name obtained from the first module? The cluster does not exist yet, and the data lookup will fail.

Terragrunt cannot fix this. Instead the concept should be solved at a higher level, in Terraform itself. Tools like Terragrunt help in splitting things up, in an attempt to fix a flaw or unconsidered use case in the design of how Terraform works…

@snorrea can you share the code of that sample repository? In my experience Terragrunt works very well, and often it is really just a configuration issue.

I’m facing the same issue. My project will ditch Terragrunt and go for plain Terraform. For context, to try this out, we set up a simple example repo with literally two Azure resources, a resource group and a storage account, as two different modules with a dependency on the resource group from the storage account. While I was originally excited by the prospect of having less repeated code, when fundamental functionality like this isn’t in place, what’s even the point? None of the workarounds work for me. Keeping my fingers crossed this gets picked up soon, meanwhile going WET…

We experienced the same issue when importing resources into Terragrunt. When running terragrunt state pull in the dependency path, we found that the “outputs” object in the state was empty. This can also be checked by running terragrunt output. Running terragrunt refresh solves it!
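The check-and-fix sequence described above, run from the dependency’s directory:

terragrunt state pull   # inspect the raw state; look for an empty "outputs" object
terragrunt output       # quicker check: prints nothing when outputs are empty
terragrunt refresh      # repopulates outputs in state without changing resources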

Is there any movement on this? This is a very peculiar case, since it is a classic case of terraform plan identifying dependencies. Mocking is a way out, but it is very misleading in plan vs apply.

@naul1 FYI terragrunt does not yet officially support tf 1.0. Please follow https://github.com/gruntwork-io/terragrunt/issues/1710 for when we provide support.

I had the same issue and I had the following configuration that caused this bug for me:

config = {
    key                  = "${path_relative_to_include()}/terraform.tfstate"
    resource_group_name  = get_env("REMOTE_STATE_RESOURCE_GROUP", "rg-terragrunt-backend-state")
    storage_account_name = get_env("REMOTE_STATE_STORAGE_ACCOUNT", "stterragruntstate")
    container_name       = get_env("REMOTE_STATE_STORAGE_CONTAINER", "terragrunt")
  }

This means the backend configuration is dynamic and determined by environment variables. However, I needed to run Terragrunt from an automation pipeline (in our case Azure Pipelines), and then the same backend configuration is used for the modules and all the dependencies. I think the better practice is to hardcode the configuration in terragrunt.hcl, so that different state files work seamlessly for different modules:

config = {
    key                  = "${path_relative_to_include()}/terraform.tfstate"
    resource_group_name  = "rg-terragrunt-backend-state"
    storage_account_name = "stterragruntstate"
    container_name       = "deployment-stamp-eu1-dev"
  }

Hope this helps if someone finds themselves in the same situation.

As a side note, the dependency fetching will also naturally fail if you wipe the terragrunt cache, as terragrunt doesn’t know to switch to that workspace when it reinitializes the cache. So if you are wiping the cache in between runs, that can also cause this issue.

I thought https://terragrunt.gruntwork.io/docs/features/caching/ meant we could safely get rid of these or are you talking about something else?

Sadly I get the same problem, but for me disable_dependency_optimization = true doesn’t fix it. Basically I can’t delete the ALB because Terragrunt can’t get the outputs of its security group, so I’m left with a manual delete, which is odd.