kata-containers: Kata VM with a CPU request >1 does not receive appropriate number of vCPUs if limit not set

Description of problem

Created a Kata container (in an OpenShift 4.7 pod) with a CPU resource request of 4 cores and no limit specified. The resulting VM had only one vCPU assigned. If I additionally set a resource limit of 4 cores, the VM gets the requisite number of vCPUs.
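
For reference, a minimal pod manifest of the kind described above might look like the sketch below; the pod name, container image, and the kata RuntimeClass name are placeholders, not taken from the report. With only resources.requests.cpu set, the VM comes up with just the default vCPU, whereas adding a matching resources.limits.cpu yields the expected count.

# Hypothetical reproduction manifest; the names, image, and RuntimeClass are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: kata-cpu-request-only
spec:
  runtimeClassName: kata
  containers:
  - name: workload
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
    resources:
      requests:
        cpu: "4"        # request only: the VM ends up with a single vCPU
      # limits:
      #   cpu: "4"      # a matching limit yields the expected vCPU count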

Expected result

Receive a Kata container whose VM is suitable for running a container that requests 4 cores (presumably at least 5 vCPUs: the default vCPU plus the 4 requested).

Actual result

Received a Kata VM with only one vCPU assigned.

Further information

The /proc/cpuinfo output below, captured from a shell inside the pod, shows a single processor entry (note the hypervisor flag):

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 158
model name      : Intel(R) Xeon(R) CPU E3-1280 v6 @ 3.90GHz
stepping        : 9
microcode       : 0xde
cpu MHz         : 3912.000
cache size      : 16384 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves arat umip md_clear arch_capabilities
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa srbds
bogomips        : 7824.00
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:


About this issue

  • Original URL
  • State: open
  • Created 3 years ago
  • Comments: 17 (5 by maintainers)

Most upvoted comments

Wait, wait, I’m not proposing to make it work with k8s + CRI-O only, that’s not the case.

What I’m saying is that we should consider opening an issue against containerd, maybe having this as a standard annotation passed down to the runtime, and then come up with a solution that benefits both CRI runtimes (as a very first step) and then, maybe, move it one layer up and make it official.
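
As an illustration of the kind of runtime-level annotation being discussed, the sketch below uses a key from Kata's io.katacontainers.config annotation namespace; the pod name and image are placeholders, and whether such an annotation is actually honored depends on the CRI runtime (CRI-O or containerd) being configured to pass it through to the Kata runtime.

# Illustrative sketch only: a per-pod annotation in the Kata configuration namespace.
# Assumes the CRI runtime is configured to allow io.katacontainers.* annotations.
apiVersion: v1
kind: Pod
metadata:
  name: kata-cpu-annotated
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: "4"
spec:
  runtimeClassName: kata
  containers:
  - name: workload
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
    resources:
      requests:
        cpu: "4"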