origin: Init containers fail when creating the zookeeper StatefulSet
When creating the zookeeper StatefulSet from origin/examples/statefulsets/zookeeper/, the first pod fails to start and is stuck in the init state. I assume one of the init containers fails while installing ZooKeeper.
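To narrow down which init container is failing, its status and logs can be inspected directly. A minimal sketch; the init container name install is an assumption based on the upstream example, not confirmed here:
# oc describe pod zoo-0        # shows per-init-container state and events
# oc logs zoo-0 -c install     # "install" init container name is assumed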
Version
# openshift version
openshift v1.4.1
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0
Steps To Reproduce
- oc new-project zookeeper
- oc create -f volume.yaml, where volume.yaml contains (one PV per replica is needed; see the sketch after these steps):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvc-datadir-zoo-0
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
- oc create -f zookeeper.yaml (without the volume.alpha.kubernetes.io/storage-class annotation)
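Since the StatefulSet has three replicas, volume.yaml presumably defines three PVs (the pv listing below shows pvc-datadir-zoo-0 through -2). A sketch of one of the remaining two, assuming the same layout; the distinct hostPath path is an assumption to keep members from sharing /tmp:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvc-datadir-zoo-1
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/zoo-1"   # assumed; the reporter's first volume uses /tmp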
Current Result
The container is stuck with: container “zk” in pod “zoo-0” is waiting to start: PodInitializing
In the logs I can see:
installing config scripts into /work-dir
installing zookeeper-3.5.0-alpha into /opt
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
mv: cannot stat '/opt/zookeeper-3.5.0-alpha': No such file or directory
cp: cannot stat '/opt/zookeeper/conf/zoo_sample.cfg': No such file or directory
zookeeper-3.5.0-alpha supports dynamic reconfiguration, enabling it
/install.sh: line 66: /opt/zookeeper/conf/zoo.cfg: No such file or directory
/install.sh: line 67: /opt/zookeeper/conf/zoo.cfg: No such file or directory
copying nc into /opt
2017/03/22 13:19:18 lookup zk on 192.168.122.254:53: read udp 172.17.0.3:36608->192.168.122.254:53: read: no route to host
2017/03/22 13:19:24 lookup zk on 192.168.122.254:53: read udp 172.17.0.3:47656->192.168.122.254:53: read: no route to host
2017/03/22 13:19:30 lookup zk on 192.168.122.254:53: read udp 172.17.0.3:34550->192.168.122.254:53: read: no route to host
2017/03/22 13:19:36 lookup zk on 192.168.122.254:53: read udp 172.17.0.3:43606->192.168.122.254:53: read: no route to host
2017/03/22 13:19:42 lookup zk on 192.168.122.254:53: read udp 172.17.0.3:49202->192.168.122.254:53: read: no route to host
2017/03/22 13:19:53 lookup zk on 192.168.122.254:53: read udp 172.17.0.3:55644->192.168.122.254:53: i/o timeout
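Both symptoms point at broken egress from the pod network: the ZooKeeper tarball download is truncated (gzip: stdin: unexpected end of file), and DNS lookups against 192.168.122.254:53 get no route to host. A quick way to confirm from inside the cluster, sketched with a throwaway busybox pod; the image choice, the flags (per newer oc/kubectl clients), and the headless service name zk are assumptions based on the example:
# oc run net-test --image=busybox --restart=Never -- sleep 3600
# oc exec net-test -- nslookup zk                                  # cluster DNS / service discovery
# oc exec net-test -- wget -qO /dev/null http://www.apache.org/    # outbound connectivity
# oc delete pod net-test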
# oc get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                     REASON    AGE
pvc-datadir-zoo-0   20Gi       RWO           Retain          Bound     zookeeper/datadir-zoo-2             6m
pvc-datadir-zoo-1   20Gi       RWO           Retain          Bound     zookeeper/datadir-zoo-0             6m
pvc-datadir-zoo-2   20Gi       RWO           Retain          Bound     zookeeper/datadir-zoo-1             6m
# oc get pvc
NAME            STATUS    VOLUME              CAPACITY   ACCESSMODES   AGE
datadir-zoo-0   Bound     pvc-datadir-zoo-1   20Gi       RWO           8m
datadir-zoo-1   Bound     pvc-datadir-zoo-2   20Gi       RWO           8m
datadir-zoo-2   Bound     pvc-datadir-zoo-0   20Gi       RWO           8m
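Note that the bindings are shuffled (datadir-zoo-0 ended up on pvc-datadir-zoo-1, and so on): PV names play no role in which claim binds where. If a deterministic pairing is wanted, e.g. so each member keeps its own hostPath data, a PV can be pre-bound via claimRef; a sketch, assuming the zookeeper namespace and the claim names above:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvc-datadir-zoo-0
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  claimRef:               # pre-binds this PV to one specific PVC
    namespace: zookeeper
    name: datadir-zoo-0
  hostPath:
    path: "/tmp"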
Result of #13168
oc describe pvc should tell you what’s wrong; most probably alpha dynamic provisioning failed. If you don’t want dynamic provisioning, remove the volume.alpha.kubernetes.io/storage-class annotation from the PVC template in your stateful set and OpenShift will try to find an existing PV. Alpha provisioning is kind of counter-intuitive: it always provisions a new volume even though there are existing PVs that could be used.

@Tiboris Ok, I’ve re-read the issue and see that you’ve tried it. Could you try the MongoDB example? If that works, then there is probably a bug related to using init containers with StatefulSets.
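For reference, the annotation in question sits in the StatefulSet’s volumeClaimTemplates. A sketch of the template with it removed; field values are illustrative, not copied from the example:
volumeClaimTemplates:
- metadata:
    name: datadir
    # annotations:
    #   volume.alpha.kubernetes.io/storage-class: anything   # remove to disable alpha dynamic provisioning
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi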
I initially thought it was an issue with hostPath mounts; I tried applying this rule, but that still hasn’t solved my problem. Still looking into it…
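“This rule” is not spelled out above; two rules commonly applied for hostPath mounts on OpenShift are an SELinux relabel of the host directory and an SCC grant for the pod’s service account. Both are guesses at what was meant, not confirmed from the issue:
# chcon -Rt svirt_sandbox_file_t /tmp                        # on the node: SELinux label so containers can access the path
# oc adm policy add-scc-to-user hostmount-anyuid -z default -n zookeeper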