terraform-provider-aws: Terraform does not read AWS profile from environment variable

This issue was originally opened by @boompig as hashicorp/terraform#8330. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.7.0

Affected Resource(s)

Probably all of AWS, observed with S3.

Terraform Configuration Files

variable "region" {
    default = "us-west-2"
}

provider "aws" {
    region = "${var.region}"
    profile = "fake_profile"
}

resource "aws_s3_bucket" "bucket" {
    bucket = "fakebucket-something-test-1"
    acl = "private"
}

Debug Output

https://gist.github.com/boompig/f05871140b928ae02b8f835d745158ac

Expected Behavior

Terraform should successfully authenticate using the profile from the environment variable and then report a no-op plan.

Actual Behavior

Terraform does not read the correct profile from the AWS_PROFILE environment variable. It works if you provide the profile name directly in the configuration file, though.

Steps to Reproduce

  1. export AWS_PROFILE=your_real_profile
  2. create a Terraform file similar to mine, with a fake profile name in the provider block
  3. terraform apply

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 26 (7 by maintainers)

Most upvoted comments

Finally, why is this whole profiles thing so janky? Like, just make it simple:

terraform -aws-profile=foo plan

and call it a day already :-\

@seanorama Do you use roles in your profile?

If you do not, you do not need to set any environment variable; it works out of the box.

Terraform 0.9.11

main.tf

variable "region" {
    default = "us-east-1"
}

provider "aws" {
    profile = "production"
    region = "${var.region}"
}

resource "aws_s3_bucket" "bucket" {
    bucket = "fakebucket-something-test-6"
    acl = "private"
}

What works:

~/.aws/config

[default]
region = eu-west-1

[profile test]
region = eu-west-1

[profile production]
region = eu-west-1

~/.aws/credentials

[default]
aws_access_key_id = foo
aws_secret_access_key = bar

[test]
aws_access_key_id = baz
aws_secret_access_key = blah

[production]
aws_access_key_id = boo
aws_secret_access_key = baa

What DOES NOT work:

~/.aws/config

[default]
region = eu-west-1

[profile test]
region = eu-west-1
role_arn = some_role
source_profile = default

[profile production]
region = eu-west-1
role_arn = some_other_role
source_profile = default

~/.aws/credentials

[default]
aws_access_key_id = foo
aws_secret_access_key = bar

Using roles in your profile? This works!

variable "region" {
    default = "us-east-1"
}

provider "aws" {
    region = "${var.region}"
    assume_role {
        role_arn = "some_role"
    }
}

resource "aws_s3_bucket" "bucket" {
    bucket = "fakebucket-something-test-6"
    acl = "private"
}

So after digging further into #2883 I found that AWS_SDK_LOAD_CONFIG needs to be set for AWS_PROFILE to work. There is no mention of that in this issue OR in the public provider documentation: https://www.terraform.io/docs/providers/aws/index.html. This is an acceptable fix but it needs to be documented.
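As a minimal sketch of that workaround (assuming a shared config profile named my_profile; the name is a placeholder):

export AWS_SDK_LOAD_CONFIG=1   # tell the AWS Go SDK to also read ~/.aws/config
export AWS_PROFILE=my_profile  # placeholder; substitute your real profile name
terraform plan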

I don’t understand the “AWS_PROFILE” requirement. Why is this required to work, when it is not required in Packer, for example? Simply defining this in the aws{} block should be sufficient.

For example, the following does not work (0.9.6):

aws.tf

provider "aws" {
  region     = "us-east-1"
  profile    = "sandbox"
}

~/.aws/credentials

[sandbox]
aws_access_key_id = FOO
aws_secret_access_key = BAR
region = us-east-1

~/.aws/config

[default]
region = us-west-2

[profile sandbox]
# Nothing required here, see credentials file

Then running: terraform plan

What does work? Same config but: AWS_PROFILE=sandbox terraform plan

So, why does the first fail while the second works? What is the point of the ENV variable?

(also still trying to figure out how region fits into this whole thing, since it seems to be equally arbitrary)

Just putting it here if someone else finds this problem.

profile works out of the box if you have configured it correctly with the AWS CLI (awscli 1.11.113 and Terraform v0.10.4):

aws configure --profile newprofile

provider "aws" {
  region = "eu-west-2"
  profile = "newprofile"
} 


For anyone interested, this still hasn’t been documented (and I just lost a lot of time because of the elusive AWS_SDK_LOAD_CONFIG), so I’ve opened a couple of PRs documenting it:

https://github.com/hashicorp/terraform/pull/21122
https://github.com/terraform-providers/terraform-provider-aws/pull/8451

The upstream PR has been merged and will be released with Terraform core 0.11.8.

profile is not working for me.

provider "aws" {
    region                = "us-east-2"
    profile                = "dev"
    shared_credentials_file = "~/.aws/credentials"
}
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

Error refreshing state: 1 error(s) occurred:

* provider.aws: No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

Same result with:

export AWS_DEFAULT_PROFILE=dev
export AWS_PROFILE=dev

My environment:

$ terraform -version
Terraform v0.9.11

OS = macOS 10.12

The issue is that I’m already using AWS_PROFILE with Packer and boto3, and it works perfectly. To use Terraform I need to unset AWS_PROFILE AND add a profile in the Terraform provider config, as sketched below. This needs to be fixed ASAP - pick one or the other, because this is overcomplicating the whole thing.
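A sketch of the workaround that comment describes ("dev" is the commenter’s example profile name):

unset AWS_PROFILE    # stop relying on the environment variable
terraform plan       # the provider "aws" block must set profile = "dev" itself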

Sorry for the confusion on my end (and the noise). I realized I was missing something crucial: the initial state configuration and the subsequent Terraform run use separate credentials. Profiles were working as expected the entire time, but I could not make the initial connection to the state bucket with the default profile.
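To illustrate what that commenter ran into: the backend that stores remote state authenticates separately from the provider block. A minimal sketch with placeholder names (the S3 backend accepts its own profile argument):

terraform {
  backend "s3" {
    bucket  = "my-state-bucket"   # placeholder bucket name
    key     = "terraform.tfstate"
    region  = "us-east-1"
    profile = "default"           # state access can use a different profile than provider "aws"
  }
}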

It seems like we can’t reproduce this issue. To help the maintainers find the actionable issues in the tracker, I’m going to close this out, but if anyone is still experiencing this and can either supply a reproduction or logs, feel free to reply below or open a new issue. Thanks!

As @kjenney said, “AWS_SDK_LOAD_CONFIG needs to be set for AWS_PROFILE to work.”