terraform-provider-tfe: tfe_workspace import breaks when the TFE token is supplied via a Terraform variable and remote execution is used

Steps to reproduce

  • Manually create a workspace named default in https://app.terraform.io
  • Add a Terraform variable tfe_token that contains a User Token as the secret value
  • Set up a local ~/.terraformrc (a sketch follows these steps)
  • Create a main.tf like the one below
  • Run terraform init
  • Run terraform import tfe_workspace.default <org>/new-workspace
  • You will receive an error:
    • Error: Error reading configuration of workspace new-workspace: unauthorized
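
For reference, the ~/.terraformrc from the setup step can be a minimal CLI configuration with a credentials block; the token shown here is just a placeholder, not a real user token:

credentials "app.terraform.io" {
  # Placeholder value; use a real Terraform Cloud user token here
  token = "xxxxxxxx.atlasv1.zzzzzzzzzzzzz"
}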

main.tf

terraform {
  backend "remote" {
    organization = "<org>"
    hostname     = "app.terraform.io"
    workspaces { name = "default" }
  }
}

variable "tfe_token" {}

provider "tfe" {
  token    = var.tfe_token
  hostname = "app.terraform.io"
}

resource "tfe_workspace" "default" {
  organization      = "<org>"
  name              = "new-workspace"
}

Notes

  • I tried setting a TFE_TOKEN environment variable in the workspace; it does not work
  • Hardcoding the token in the provider block instead of using the variable works fine (see the sketch after these notes)
  • Using the token directly with curl against the TFE API works fine
  • This also seems to be broken for other providers, such as AWS
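
For comparison, the hardcoded variant mentioned in the notes above (which works) looks roughly like this; the token value is a placeholder:

provider "tfe" {
  # Hardcoding the token only as a test; placeholder value shown
  token    = "xxxxxxxx.atlasv1.zzzzzzzzzzzzz"
  hostname = "app.terraform.io"
}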

Is this behavior caused by Terraform Cloud’s sensitive “write only” secrets being unreadable when running import from my local machine? If that’s the case, then there needs to be an option to run arbitrary commands like import on the remote executor.

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 3
  • Comments: 15 (8 by maintainers)

Most upvoted comments

Any update on this? Also ran into the exact same problem with tf v0.12.23.

Interesting! Thank you so much for sending all this info @kuwas. I’m going to have a closer look at the related code and test some things myself as well. Will update here in the next day or so…

@svanharmelen Since you didn’t pass a token to the tfe provider and your import worked, you probably have the TFE_TOKEN env var set locally, and it was picked up by the provider automatically.

Please try the following example, but make sure that:

  • The local TFE_TOKEN env var is unset
  • The variable tfe_token is not set locally through a TF_VAR_ env var
  • The variable tfe_token is set as a Terraform variable remotely in the TFE/TFC workspace
  • The variable tfe_token is referenced by the TFE provider as the token argument
  • No *.auto.tfvars files exist locally or in the repo

terraform {
  backend "remote" {
    organization = "<org>"
    hostname     = "app.terraform.io"
    workspaces { name = "default" }
  }
}

variable "tfe_token" {}

provider "tfe" {
  token    = var.tfe_token
  hostname = "app.terraform.io"
}

resource "tfe_workspace" "default" {
  organization      = "<org>"
  name              = "new-workspace"
}

By the way, this issue isn’t specific to this provider; I’ve had to manually provide secrets to the GitHub, AzureRM, and Google providers in order to do any import operations.

I think it’s caused by the fact that TFE/TFC workspace variables, which are write-only, are not usable by the terraform import command, which relies on local secrets. I believe you mentioned that this was expected behaviour.

If we had the functionality to run import commands directly through the TFE/TFC UI or API, which would use the remote secrets instead of local ones, that would resolve this.

Thanks

I had the same setup. What he meant was that the master workspace (“default”) has a sensitive tfe_token which gets used to access Terraform Cloud. It’s impossible to import something because sensitive variables are neither accessible nor overridable for local commands.

The mentioned workaround was simply to mark the token as non-sensitive so it becomes accessible.

The bug here (apparently also for the AWS provider) is really the lack of a way to override or locally provide such variables.
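
As a rough sketch of that workaround, assuming the variable is managed with the provider’s tfe_variable resource rather than edited in the UI (the workspace and variable names here are illustrative):

data "tfe_workspace" "master" {
  name         = "default"
  organization = "<org>"
}

resource "tfe_variable" "tfe_token" {
  key          = "tfe_token"
  value        = var.tfe_token
  category     = "terraform"
  # The workaround: leaving the variable non-sensitive makes it readable by
  # local commands such as import, at the cost of exposing the token value.
  sensitive    = false
  workspace_id = data.tfe_workspace.master.id
}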

Ah, I see… I think I didn’t read your initial comment correctly the first time, but now reading it again I see what you are trying to do.

The “problem” here is that only operations are executed remotely, whereas commands are always executed locally. Operations in this sense are refresh, plan, and apply.

The remote backend currently supports the plan and apply operations, so only they are executed remotely. When trying to run a refresh operation, you will get an error saying that the operation is not yet supported.

All other commands (import, show, or taint, for example) are executed locally by downloading the state file, executing the command, and then uploading the (changed) state file again.

And while it is possible for the remote backend to also pull down any variables configured on the workspace, it indeed does not have access to any sensitive values. Currently it will only fetch workspace variables for the console command, and any sensitive values are replaced with <sensitive> instead of the real value.

So at this point in time the only way to solve your issue is to make sure the required token is available locally where you are trying to execute the import command, either by exporting the token or by configuring a credentials block in your CLI config file (which is the preferred way to configure TFC credentials).

So I tried setting these credentials a few ways locally, but all of them gave me the same unauthorized response from the API as well.

  • Creating a <name>.tfvars file with the sensitive variables and passing it in using terraform import -var-file <name>.tfvars (sketched below)
  • Setting the credentials block in the ~/.terraformrc cli config file
  • Exporting TFE_TOKEN with the credential locally in the shell
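
The tfvars attempt, for example, looked roughly like this (the file name and token are placeholders):

# secrets.tfvars (hypothetical file name; keep it out of version control)
tfe_token = "xxxxxxxx.atlasv1.zzzzzzzzzzzzz"

# Then run:
#   terraform import -var-file secrets.tfvars tfe_workspace.default <org>/new-workspace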

If I had to guess, the "<sensitive>" value is being pulled by the import command and somehow taking precedence over all the other secrets I provided.