kops: Error attaching EBS volume to instance IncorrectState Status code 400
Hi guys, I have a reproducible issue with dynamic provisioning of EBS volumes. On a fresh cluster, any attempt to provision an EBS volume through a deployment results in a failure to attach the volume (although I think the volume does actually get attached to the node). Here is the error message from the dashboard:
[SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "data-helm-elastic-elasticsearch-data-0", which is unexpected. (identical message repeated 5 times)]
Failed to attach volume "pvc-4f501f74-8490-11e7-89c0-0a47f1972d2a" on node "ip-10-230-69-212.us-east-2.compute.internal" with: Error attaching EBS volume "vol-072cca03da4e963e6" to instance "i-0418a3b1a5f0a0146": IncorrectState: vol-072cca03da4e963e6 is not 'available'. status code: 400, request id: 521efe2a-4298-4fe9-8a07-8ff4870c20c6
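For context on the error itself: IncorrectState means the EC2 API sees the volume in a state other than 'available' (typically 'in-use', or stuck in 'attaching' from an earlier attempt). A quick way to check the volume's actual state and attachment, assuming the AWS CLI is configured for us-east-2:

aws ec2 describe-volumes --volume-ids vol-072cca03da4e963e6 --region us-east-2 --query 'Volumes[0].[State,Attachments[0].InstanceId,Attachments[0].State]' --output text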
I installed the cluster like this in us-east-2 (Ohio):
cmd: kops create cluster --associate-public-ip=false --cloud=aws --bastion=false --dns-zone=redacted --kubernetes-version=1.6.2 --master-size=t2.medium --master-volume-size=60 --master-zones=us-east-2a,us-east-2b,us-east-2c --network-cidr=10.230.0.0/16 --networking=weave --node-count=2 --node-size=t2.large --node-volume-size=128 --target=direct --topology=private --zones=us-east-2a,us-east-2b,us-east-2c --ssh-public-key=~/.ssh/id_rsa.pub --name=redacted --state s3://k8s-us-east
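For anyone reproducing: it's worth waiting until the cluster validates before installing any charts, e.g. (name and state bucket as in the create command above):

kops validate cluster --name redacted --state s3://k8s-us-east
kubectl get nodes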
Next I ran helm init.
Then I did a helm install of the Elasticsearch chart, where I reduced the replica counts to 1 (to save space on the test). I have attached the values.yaml file to this ticket.
helm install --name helm-elastic incubator/elasticsearch --namespace staging --debug -f values.yaml
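After the install, the state of the claim and the pod events can be checked like this (pod name inferred from the PVC name in the error above):

kubectl get pvc -n staging
kubectl describe pvc data-helm-elastic-elasticsearch-data-0 -n staging
kubectl describe pod helm-elastic-elasticsearch-data-0 -n staging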
The values.yaml file looks like this; the only thing I changed was reducing the replicas to 1:
image:
  repository: "jetstack/elasticsearch-pet"
  tag: "2.4.0"
  pullPolicy: "Always"

cluster:
  name: "elasticsearch"
  config:

client:
  name: client
  replicas: 1
  serviceType: ClusterIP
  heapSize: "128m"
  antiAffinity: "soft"
  resources:
    limits:
      cpu: "1"
      memory: "512Mi"
    requests:
      cpu: "25m"
      memory: "256Mi"

master:
  name: master
  replicas: 1
  heapSize: "128m"
  antiAffinity: "soft"
  resources:
    limits:
      cpu: "1"
      memory: "512Mi"
    requests:
      cpu: "25m"
      memory: "256Mi"

data:
  name: data
  replicas: 1
  heapSize: "1536m"
  storage: "30Gi"
  # storageClass: "ssd"
  terminationGracePeriodSeconds: 3600
  antiAffinity: "soft"
  resources:
    limits:
      cpu: "1"
      memory: "512Mi"
    requests:
      cpu: "25m"
      memory: "256Mi"
You guys should be able to reproduce this failure pretty easily. I've done some googling, and one possible cause is that --cloud-provider=aws is not being passed on the kube-scheduler command line. I checked my scheduler command line and didn't see it there, and my search through the kops issues suggests it isn't settable via the API.
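For reference, one way to confirm what the scheduler was started with (this assumes the usual k8s-app label kops puts on the static scheduler pod; adjust the selector if your labels differ):

kubectl -n kube-system get pods -l k8s-app=kube-scheduler -o yaml | grep -i cloud-provider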
Some links: https://github.com/kubernetes/kubernetes/issues/45726 https://groups.google.com/forum/#!topic/kubernetes-users/VshrZGFOmbo
@chrislovecnm - if you could take a look at this I would be in your debt!
I tried to use a Helm chart to install Consul, which uses a StatefulSet… Here are the events: