kubernetes: skydns container exits repeatedly and new one gets created

Kubernetes version used: 0.16.2
skydns image: gcr.io/google_containers/skydns:2015-03-11-001
kube2sky image: gcr.io/google_containers/kube2sky:1.4

I am running this setup locally and am able to run pods, services, and replication controllers. When I set up DNS by running the rc and service from skydns-rc.yaml and skydns-svc.yaml (a trimmed sketch of the rc's containers is included below), I observed that skydns exits repeatedly and gets recreated. Watching the logs of this container gives the following lines before it exits.

    $ sudo docker logs -f f7c77646eff5
    2015/05/10 17:09:16 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns/config) [759]
    2015/05/10 17:09:16 skydns: ready for queries on kubernetes.local. for tcp://0.0.0.0:53 [rcache 0]
    2015/05/10 17:09:16 skydns: ready for queries on kubernetes.local. for udp://0.0.0.0:53 [rcache 0]
    2015/05/10 17:09:54 skydns: can not forward, name too short (less than 2 labels): 'foobar.'
    2015/05/10 17:09:54 skydns: can not forward, name too short (less than 2 labels): 'foobar.'
    2015/05/10 17:09:54 skydns: can not forward, name too short (less than 2 labels): 'foobar.'

Can this be considered normal behavior? (The DNS functionality itself works well.)
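For reference, skydns-rc.yaml defines a single pod running etcd, kube2sky and skydns side by side. The sketch below is trimmed to the containers section, and the flags are from memory rather than copied from the manifest, so treat the exact values as illustrative:

    containers:
    - name: etcd
      image: gcr.io/google_containers/etcd   # local etcd that holds the DNS records (tag and flags omitted)
    - name: kube2sky
      image: gcr.io/google_containers/kube2sky:1.4
      args:
      - -domain=kubernetes.local             # watches the API server and writes service records into etcd
    - name: skydns
      image: gcr.io/google_containers/skydns:2015-03-11-001
      args:
      - -machines=http://127.0.0.1:4001      # the etcd container above
      - -addr=0.0.0.0:53
      - -domain=kubernetes.local.            # must match the kube2sky domain
      ports:
      - name: dns
        containerPort: 53
        protocol: UDP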

About this issue

  • State: closed
  • Created 9 years ago
  • Comments: 32 (11 by maintainers)

Most upvoted comments

I'm encountering the same problem as @biswars.

DNS resolution is fine, but the skydns container is getting killed every 30s.

The problem seems to be related to the livenessProbe check. After disabling it, the skydns container is no longer killed. Here is the probe in question:

        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - nslookup kubernetes.default.kube.local localhost >/dev/null
          initialDelaySeconds: 30
          timeoutSeconds: 5
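
If the probe is only failing because the lookup is slow or the records are not populated yet (rather than DNS being genuinely broken), loosening the probe instead of removing it might be enough. An untested sketch, with the numbers being guesses:

        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - nslookup kubernetes.default.kube.local localhost >/dev/null
          initialDelaySeconds: 60   # give kube2sky more time to populate etcd before the first check
          timeoutSeconds: 10        # allow slower lookups before the kubelet treats the probe as failed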

Still investigating while discovering Kubernetes internals 😃