kubernetes: Pods that fail to fit host ports should stay pending rather than be marked as failed?
cc @yujuhong @dashpole @Random-Liu @yguo0905. This smells like a node issue? Or please point me to the right sig 😃
senlu@senlu:~/work/src/k8s.io/test-infra/prow$ kubectl describe po 4fcd6aa3-1e4f-11e8-b987-0a580a6c0061 -n=test-pods
Name:         4fcd6aa3-1e4f-11e8-b987-0a580a6c0061
Namespace:    test-pods
Node:         gke-prow-pool-n1-highmem-8-81ce4395-gdf0/
Start Time:   Fri, 02 Mar 2018 11:24:42 -0800
Labels:       created-by-prow=true
              event-GUID=49618e60-1e4f-11e8-9c0b-810aeeebc663
              preset-service-account=true
              prow.k8s.io/type=presubmit
Annotations:  kubernetes.io/limit-ranger=LimitRanger plugin set: memory request for container 4fcd6aa3-1e4f-11e8-b987-0a580a6c0061-0
              prow.k8s.io/job=pull-kubernetes-verify
Status:       Failed
Reason:       PodFitsHostPorts
Message:      Pod Predicate PodFitsHostPorts failed
IP:
Containers:
  4fcd6aa3-1e4f-11e8-b987-0a580a6c0061-0:
    Image:  gcr.io/k8s-testimages/bootstrap:v20180215-b2a89850e
    Port:   9999/TCP
    Args:
      --clean
      --job=$(JOB_NAME)
      --repo=k8s.io/$(REPO_NAME)=$(PULL_REFS)
      --service-account=/etc/service-account/service-account.json
      --upload=gs://kubernetes-jenkins/pr-logs
      --timeout=75
    Requests:
      cpu:     4
      memory:  1Gi
    Environment:
      DOCKER_IN_DOCKER_ENABLED:        true
      GOOGLE_APPLICATION_CREDENTIALS:  /etc/service-account/service-account.json
      BUILD_NUMBER:                    80557
      REPO_NAME:                       kubernetes
      BUILD_ID:                        80557
      PROW_JOB_ID:                     4fcd6aa3-1e4f-11e8-b987-0a580a6c0061
      JOB_SPEC:                        {"type":"presubmit","job":"pull-kubernetes-verify","buildid":"80557","prowjobid":"4fcd6aa3-1e4f-11e8-b987-0a580a6c0061","refs":{"org":"kubernetes","repo":"kubernetes","base_ref":"master","base_sha":"ae1fc13aee81e66b9b74a5fb881ff3f90463ff4e","pulls":[{"number":60519,"author":"bsalamat","sha":"18fb7ec8e8d011a0208bac242ffde08ff2169348"}]}}
      REPO_OWNER:                      kubernetes
      PULL_BASE_REF:                   master
      PULL_BASE_SHA:                   ae1fc13aee81e66b9b74a5fb881ff3f90463ff4e
      PULL_REFS:                       master:ae1fc13aee81e66b9b74a5fb881ff3f90463ff4e,60519:18fb7ec8e8d011a0208bac242ffde08ff2169348
      PULL_NUMBER:                     60519
      JOB_NAME:                        pull-kubernetes-verify
      JOB_TYPE:                        presubmit
      PULL_PULL_SHA:                   18fb7ec8e8d011a0208bac242ffde08ff2169348
    Mounts:
      /docker-graph from docker-graph (rw)
      /etc/service-account from service (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6nn9x (ro)
Volumes:
  docker-graph:
    Type:  HostPath (bare host directory volume)
    Path:  /mnt/disks/ssd0/docker-graph
  service:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  service-account
    Optional:    false
  default-token-6nn9x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6nn9x
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age  From                                              Message
  ----     ------            ---  ----                                              -------
  Normal   Scheduled         9m   default-scheduler                                 Successfully assigned 4fcd6aa3-1e4f-11e8-b987-0a580a6c0061 to gke-prow-pool-n1-highmem-8-81ce4395-gdf0
  Warning  PodFitsHostPorts  9m   kubelet, gke-prow-pool-n1-highmem-8-81ce4395-gdf0  Predicate PodFitsHostPorts failed
About this issue
- State: closed
- Created 6 years ago
- Comments: 15 (12 by maintainers)
My guess would be that it got scheduled, but the kubelet rejected it due to a port conflict on the node, which is still working as intended.
I think that would be sig-scheduling; I'm not sure this is a bug, though.
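To make the explanation above concrete: the kubelet re-runs an admission check when a scheduled pod arrives, and a pod requesting a host port already claimed by a running pod on that node is rejected outright (hence `Status: Failed` rather than a reschedule). Below is a minimal sketch of that host-port conflict check, not the actual kubelet code; the real predicate also matches on protocol and host IP (including the `0.0.0.0` wildcard), and the dict shapes here are illustrative, not Kubernetes API types.

```python
# Simplified model of the PodFitsHostPorts check: a pod that requests a
# hostPort already taken on the node fails admission. Illustrative only.

def host_ports(pod):
    """Collect (hostPort, protocol) pairs this pod claims on its node."""
    claimed = set()
    for container in pod.get("containers", []):
        for port in container.get("ports", []):
            if port.get("hostPort"):
                claimed.add((port["hostPort"], port.get("protocol", "TCP")))
    return claimed

def fits_host_ports(new_pod, pods_on_node):
    """True iff new_pod's host ports collide with no pod already on the node."""
    used = set()
    for pod in pods_on_node:
        used |= host_ports(pod)
    return not (host_ports(new_pod) & used)

# Two prow-style pods both asking for hostPort 9999/TCP cannot share a node:
running = [{"containers": [{"ports": [{"hostPort": 9999, "protocol": "TCP"}]}]}]
new_pod = {"containers": [{"ports": [{"hostPort": 9999, "protocol": "TCP"}]}]}
print(fits_host_ports(new_pod, running))  # False: predicate fails, pod rejected
print(fits_host_ports(new_pod, []))       # True: an empty node would accept it
```

This is why the race described in this issue is possible: the scheduler evaluated the same predicate against a slightly stale view of the node, placed the pod, and the kubelet's fresher check then failed.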