terraform-provider-aws: dms-vpc-role is not configured properly when creating aws_dms_replication_instance
This is similar to (or the same as) terraform-providers/terraform-provider-aws#7748, which was closed.
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform Version
Terraform v0.12.16
+ provider.aws v2.39.0
Affected Resource(s)
- aws_dms_replication_subnet_group
- aws_dms_replication_instance
Terraform Configuration Files
# Roles defined as per official documentation:
# https://www.terraform.io/docs/providers/aws/r/dms_replication_instance.html
# Database Migration Service requires the below IAM Roles to be created before
# replication instances can be created. See the DMS Documentation for
# additional information: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.APIRole.html
# * dms-vpc-role
# * dms-cloudwatch-logs-role
# * dms-access-for-endpoint
data "aws_iam_policy_document" "dms_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      identifiers = ["dms.amazonaws.com"]
      type        = "Service"
    }
  }
}

resource "aws_iam_role" "dms-access-for-endpoint" {
  assume_role_policy = "${data.aws_iam_policy_document.dms_assume_role.json}"
  name               = "dms-access-for-endpoint"
}

resource "aws_iam_role_policy_attachment" "dms-access-for-endpoint-AmazonDMSRedshiftS3Role" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonDMSRedshiftS3Role"
  role       = "${aws_iam_role.dms-access-for-endpoint.name}"
}

resource "aws_iam_role" "dms-cloudwatch-logs-role" {
  assume_role_policy = "${data.aws_iam_policy_document.dms_assume_role.json}"
  name               = "dms-cloudwatch-logs-role"
}

resource "aws_iam_role_policy_attachment" "dms-cloudwatch-logs-role-AmazonDMSCloudWatchLogsRole" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole"
  role       = "${aws_iam_role.dms-cloudwatch-logs-role.name}"
}

resource "aws_iam_role" "dms-vpc-role" {
  assume_role_policy = "${data.aws_iam_policy_document.dms_assume_role.json}"
  name               = "dms-vpc-role"
}

resource "aws_iam_role_policy_attachment" "dms-vpc-role-AmazonDMSVPCManagementRole" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole"
  role       = "${aws_iam_role.dms-vpc-role.name}"
}

# Issue when creating aws_dms_replication_subnet_group
# (required for aws_dms_replication_instance)
resource "aws_dms_replication_subnet_group" "replication_subnet" {
  replication_subnet_group_description = "Test replication subnet group"
  replication_subnet_group_id          = "test-dms-replication-subnet-group-tf"
  subnet_ids                           = "${aws_subnet.database_subnet.*.id}"

  # Explicit depends_on for required role
  depends_on = ["aws_iam_role.dms-vpc-role"]
}
Debug Output
Error applying plan:
- Error: AccessDeniedFault: The IAM Role arn:aws:iam::xxxxxxxx:role/dms-vpc-role is not configured properly. status code: 400, request id: xxxxxxxx
- on dms.tf line xxx, in resource "aws_dms_replication_subnet_group" "replication_subnet": xxx: resource "aws_dms_replication_subnet_group" "replication_subnet" {
Expected Behavior
On first terraform apply:
- Apply complete! Resources: X added, 0 changed, 0 destroyed.
Actual Behavior
On first terraform apply:
Error applying plan:
- Error: AccessDeniedFault: The IAM Role arn:aws:iam::xxxxxxxx:role/dms-vpc-role is not configured properly. status code: 400, request id: xxxxxxxx
- on dms.tf line xxx, in resource "aws_dms_replication_subnet_group" "replication_subnet": xxx: resource "aws_dms_replication_subnet_group" "replication_subnet" {
On second terraform apply:
- Apply complete! Resources: X added, 0 changed, 0 destroyed.
Steps to Reproduce
terraform apply
About this issue
- State: closed
- Created 5 years ago
- Reactions: 56
- Comments: 21 (3 by maintainers)
- My workaround is to depends_on the attachment, rather than the role, and add a sleep (sketched below). I think it takes the IAM change some time to propagate through so that DMS picks up that you have the permissions.
- Just for additional info, running with -parallelism=1 also solves this issue.
- I can confirm that the depends_on workaround does work; probably putting that in the docs is an option?
- depends_on + sleep worked for me
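For reference, a minimal sketch of the depends_on-plus-sleep workaround from the comments, applied to the configuration above. It assumes the hashicorp/time provider's time_sleep resource is used for the delay (a null_resource with a local-exec sleep works the same way), and the 30-second duration is only a guess for how long the IAM change needs to propagate; adjust as needed.

# Wait for the AmazonDMSVPCManagementRole attachment to propagate before
# DMS validates dms-vpc-role. Duration is an assumption; tune as needed.
resource "time_sleep" "wait_for_dms_vpc_role" {
  create_duration = "30s"

  # Depend on the policy attachment, not just the role itself.
  depends_on = ["aws_iam_role_policy_attachment.dms-vpc-role-AmazonDMSVPCManagementRole"]
}

# The subnet group from the configuration above, changed to wait on the
# attachment plus the sleep instead of only the role.
resource "aws_dms_replication_subnet_group" "replication_subnet" {
  replication_subnet_group_description = "Test replication subnet group"
  replication_subnet_group_id          = "test-dms-replication-subnet-group-tf"
  subnet_ids                           = "${aws_subnet.database_subnet.*.id}"

  depends_on = ["time_sleep.wait_for_dms_vpc_role"]
}

Alternatively, as noted in the comments, running terraform apply -parallelism=1 serializes resource creation and also avoids the race on the first apply.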