kubernetes: Fluentd pod failed to become ready on master

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: Fluentd pod failed to become ready on master after an upgrade from 1.8 to HEAD.

Logs from https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-new-master-upgrade-master/1043:

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:260
Oct 27 16:30:05.440: Error waiting for all pods to be running and ready: 1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 10m0s
POD                      NODE                 PHASE   GRACE CONDITIONS
fluentd-gcp-v2.0.9-4pp8p bootstrap-e2e-master Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-10-27 16:02:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-10-27 16:13:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-10-27 16:02:03 +0000 UTC Reason: Message:}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:198

Multiple CIs are hitting this:

Anything else we need to know?: From the timeline, possibly caused by the fluentd hostnetwork change:

@crassirostris @loburm

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 17 (12 by maintainers)

Most upvoted comments

Thanks @dnardo for debugging this! @dnardo found that net.ipv4.conf.all.route_localnet on the master node is 0, when it should be 1.

I set it to 1, and it does fix the issue.
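For anyone hitting the same symptom, a minimal sketch of checking and applying the fix (assumes a Linux host; the write requires root and is shown commented out):

```shell
#!/bin/sh
# Read the sysctl that @dnardo identified (read-only, safe to run anywhere).
val=$(cat /proc/sys/net/ipv4/conf/all/route_localnet)
echo "net.ipv4.conf.all.route_localnet = ${val}"

# On an affected master the value is 0. Setting it to 1 allows the kernel
# to route packets DNAT-ed by iptables to 127.0.0.0/8, which hostNetwork
# pods depend on when traffic is redirected to localhost. Requires root:
#   sysctl -w net.ipv4.conf.all.route_localnet=1
```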

The iptables rules are fine:

e2e-test-ihmccreery-master ihmccreery # iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-N DOCKER
-N DOCKER-ISOLATION
-N KUBE-FIREWALL
-N KUBE-METADATA-SERVER
-A INPUT -j KUBE-FIREWALL
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -j ACCEPT
-A INPUT -p udp -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A FORWARD -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j KUBE-METADATA-SERVER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -p tcp -j ACCEPT
-A FORWARD -p udp -j ACCEPT
-A FORWARD -p icmp -j ACCEPT
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-METADATA-SERVER -j ACCEPT
-A KUBE-METADATA-SERVER -j DROP