kubernetes: kubectl run applies bad default CPU requirements

I was trying to run a Docker image using kubectl run.

I saw this kind of failure:

fit failure on node (gke-getuser-1cf9f3b1-node-z76p): Node didn't have enough resource: CPU, requested: 100, used: 920, capacity: 1000

kubectl run requested 100m of CPU:

    Requests:
      cpu:  100m

No pods were running on that node (a GCE g1-small node). I expected my container to be able to run.
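A workaround sketch (the pod name and image below are placeholders, not from the original report): instead of relying on the default request kubectl run applies, write a manifest with an explicitly smaller CPU request so the pod fits into whatever capacity the node has left.

```yaml
# Hypothetical pod manifest; name and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: getuser
spec:
  containers:
  - name: getuser
    image: example/getuser      # placeholder image
    resources:
      requests:
        cpu: 10m                # explicit request, well under the default 100m
```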

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 3
  • Comments: 25 (10 by maintainers)

Most upvoted comments

I have exactly the same problem. It happened on n1-standard-1 and it happens on g1-small.

First I’m deploying a Mongo container which really does nothing at all on its own. Then I deploy a Meteor container, and 2 out of 3 times the problem is: Node didn't have enough resource: CPU, requested: 100, used: 920, capacity: 1000

I really don’t think an idle MongoDB is using 920m of CPU.
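As far as I can tell, the 920 in the error is not measured usage at all: it is the sum of the CPU *requests*, in millicores, of the pods already placed on the node (on GKE that includes the system pods in kube-system). A minimal sketch of the scheduler's fit check under that assumption (the function name is mine, not from the scheduler source):

```python
def fits(node_capacity_m, already_requested_m, pod_request_m):
    """True if the pod's CPU request fits in the node's remaining capacity.

    All values are in millicores (1000m = 1 core), mirroring the numbers
    in the error message above.
    """
    return already_requested_m + pod_request_m <= node_capacity_m

# g1-small: 1000m capacity; existing pods already request 920m.
print(fits(1000, 920, 100))  # 920 + 100 > 1000 -> False, the "fit failure"
print(fits(1000, 920, 50))   # 920 + 50 <= 1000 -> True
```

This is why the node looks "empty" yet rejects a 100m pod: only about 80m of requestable CPU remains.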

The problem then seems to stick to a specific node; retrying on that node doesn’t succeed in any usable time frame. If I delete the two ReplicationControllers, delete the node, and re-create the ReplicationControllers once a new node is ready, then after a few iterations of this both containers settle on the same node. Everything then works fine until I have to re-create one of the containers, and the problems start again.

Adding more nodes to the cluster doesn’t really seem to help. GKE tries to place both containers on one node (which is perfectly reasonable, performance-wise) and if that node happens to have that problem, it will continue the fail loop.

Adding resource limits didn’t help either:

        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 500m
            memory: 500Mi

Running a second node did help, though. Should the tutorials say that you can’t just run a single-node cluster?
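In hindsight, setting requests equal to limits at 500m can only make the fit check harder: with roughly 920m already requested on the node, any pod asking for more than about 80m will be rejected. A sketch of the opposite direction, keeping the limits but shrinking the requests (the exact values are illustrative):

```yaml
        resources:
          limits:
            cpu: 500m          # cap stays at half a core
            memory: 500Mi
          requests:
            cpu: 50m           # ask for less than the node's ~80m of free CPU
            memory: 200Mi
```

Requests drive scheduling while limits only cap runtime usage, so a small request with a larger limit lets the pod land on a nearly full node and still burst when CPU is idle.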