cluster-api-provider-aws: CAPA fails silently when the instance profile of a machine is incorrectly specified
/kind bug
What steps did you take and what happened: Create a new cluster. Specify a different/wrong instance profile for the control plane than what's available on the account. CAPA fails silently, with no errors or events being generated.
What did you expect to happen: CAPA should log the failure and generate a warning/error event indicating the reason.
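For context, a minimal sketch of what "generate a warning/error event" could look like, assuming a client-go record.EventRecorder is wired into the reconciler; the helper name and the FailedCreate reason are illustrative, not CAPA's actual implementation:

```go
package ec2

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
)

// reportRunInstanceFailure is a hypothetical helper: it attaches a Warning
// event to the failing object so `kubectl describe awsmachine` shows why
// the instance never launched, instead of the controller failing silently.
func reportRunInstanceFailure(recorder record.EventRecorder, obj runtime.Object, err error) {
	recorder.Eventf(obj, corev1.EventTypeWarning, "FailedCreate",
		"Failed to run EC2 instance: %v", err)
}
```

An event recorded this way surfaces in `kubectl get events` alongside the AWSMachine, which is where most users would first look when a machine never comes up.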
Environment:
- Cluster-api-provider-aws version:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):
Added the following log line at https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/master/pkg/cloud/services/ec2/instances.go#L190:

s.scope.V(2).Info("Error running instance", "cause", fmt.Sprintf("%v", errors.Cause(err)), "isFailed", awserrors.IsFailedDependency(errors.Cause(err)))

and it produces:

I1002 19:38:09.709996 1 instances.go:190] controllers/AWSMachine "msg"="Error running instance" "awsCluster"="cloudsdale" "awsMachine"="cloudsdale-controlplane-0" "cluster"="cloudsdale" "machine"="cloudsdale-controlplane-0" "namespace"="default" "cause"="InvalidParameterValue: Value (invalid-controllers.cluster-api-provider-aws.sigs.k8s.io) for parameter iamInstanceProfile.name is invalid. Invalid IAM Instance Profile name\n\tstatus code: 400, request id: d4fb95aa-f14c-4dc7-84b3-efb14c8fab2f" "isFailed"=false

so the error is not categorized as a dependency error.
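That matches the log output: the SDK reports the bad profile name as InvalidParameterValue, which IsFailedDependency evidently does not match. A minimal sketch of how such an error could be recognized, assuming aws-sdk-go's awserr package and the pkg/errors wrapping already used in the log line above; the IsInvalidParameter helper is hypothetical:

```go
package awserrors

import (
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/pkg/errors"
)

// IsInvalidParameter is a hypothetical check: it reports whether err (or its
// cause) carries the "InvalidParameterValue" code, as seen when
// iamInstanceProfile.name does not exist in the account.
func IsInvalidParameter(err error) bool {
	if aerr, ok := errors.Cause(err).(awserr.Error); ok {
		return aerr.Code() == "InvalidParameterValue"
	}
	return false
}
```

A classifier along these lines would let the controller treat a bad instance profile as a terminal configuration error worth surfacing, rather than something to retry silently.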
And actually, I do see events being generated:
Is it possible this issue got solved in the meantime?
Reproduces in v0.4.0. CAPA pod logs: