kata-containers: CRI-O tests not working with Rust agent 2.0
Using the Rust agent 2.0, we see a lot of failures in the CRI-O tests. Each failure below reports the same error while creating the pod sandbox: "file /usr/libexec/kata-containers/kata-proxy does not exist".
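A quick way to confirm this on the CI node is sketched below; the paths are copied from the error and conmon lines in the logs, and the check itself is just a suggestion of mine, not part of the test suite (2.0 no longer uses a separate kata-proxy, so the binary is expected to be absent on a 2.0 install):

    # Check for the legacy proxy binary the runtime complains about
    # (path copied verbatim from the error message below).
    ls -l /usr/libexec/kata-containers/kata-proxy

    # Check which runtime binary CRI-O is invoking (path copied from the
    # conmon arguments in the log below).
    /usr/local/bin/kata-runtime --version

Full output of the failing ctr.bats tests: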
10:39:04 not ok 2 ctr termination reason Completed
10:39:04 # (in test file ctr.bats, line 28)
10:39:04 # `[ "$status" -eq 0 ]' failed
10:39:04 # 0
10:39:04 # time="2020-06-05 15:33:19.571582826Z" level=debug msg="found valid runtime \"runc\" for runtime_path \"/usr/local/bin/kata-runtime\"" file="config/config.go:891"
10:39:04 # time="2020-06-05 15:33:19.571717230Z" level=debug msg="using hooks directory: /tmp/tmp.IM8zxn26aX/hooks" file="config/config.go:760"
10:39:04 # time="2020-06-05 15:33:19.571769332Z" level=info msg="using conmon executable: /usr/local/bin/conmon" file="config/config.go:823"
10:39:04 # time="2020-06-05 15:33:19.571806233Z" level=info msg="using pinns executable: /tmp/jenkins/workspace/kata-containers-2.0-tests-debian-PR/go/src/github.com/cri-o/cri-o/bin/pinns" file="config/config.go:823"
10:39:04 # time="2020-06-05 15:33:19.572069143Z" level=debug msg="cached value indicated that overlay is supported" file="overlay/overlay.go:173"
10:39:04 # time="2020-06-05 15:33:19.572150945Z" level=debug msg="cached value indicated that metacopy is not being used" file="overlay/overlay.go:207"
10:39:04 # time="2020-06-05 15:33:19.573712700Z" level=debug msg="cached value indicated that native-diff is usable" file="overlay/overlay.go:437"
10:39:04 # time="2020-06-05 15:33:19.573795903Z" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" file="overlay/overlay.go:265"
10:39:04 # time="2020-06-05 15:33:19.573863705Z" level=info msg="[graphdriver] using prior storage driver: overlay" file="drivers/driver.go:279"
10:39:04 # time="2020-06-05 15:33:19.579128689Z" level=debug msg="reading hooks from /tmp/tmp.IM8zxn26aX/hooks" file="hooks/read.go:65"
10:39:04 # time="2020-06-05 15:33:19.584405573Z" level=info msg="Found CNI network crionet (type=bridge) at /tmp/tmp.IM8zxn26aX/cni/net.d/10-crio.conf" file="ocicni/ocicni.go:321"
10:39:04 # time="2020-06-05 15:33:19.584471575Z" level=info msg="Update default CNI network name to crionet" file="ocicni/ocicni.go:375"
10:39:04 # time="2020-06-05 15:33:19.587269973Z" level=info msg="no seccomp profile specified, using the internal default" file="server/server.go:358"
10:39:04 # time="2020-06-05 15:33:19.587345975Z" level=info msg="installing default apparmor profile: crio-default-1.17.0-dev" file="server/server.go:365"
10:39:04 # time="2020-06-05 15:33:19.613414585Z" level=debug msg="Golang's threads limit set to 115200" file="server/server.go:267"
10:39:04 # time="2020-06-05 15:33:19.615485257Z" level=debug msg="sandboxes: []" file="server/server.go:450"
10:39:04 # time="2020-06-05 15:33:19.615909372Z" level=debug msg="registered SIGHUP watcher for file \"/tmp/tmp.IM8zxn26aX/crio.conf\"" file="server/server.go:645"
10:39:04 # time="2020-06-05 15:33:19.616246783Z" level=debug msg="monitoring \"/tmp/tmp.IM8zxn26aX/hooks\" for hooks" file="hooks/monitor.go:43"
10:39:04 # time="2020-06-05 15:33:20.487714679Z" level=debug msg="request: &StatusRequest{Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=13c2c7f1-2e97-4abe-892d-21f1be0a7d9a
10:39:04 # time="2020-06-05 15:33:20.487806082Z" level=debug msg="response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=13c2c7f1-2e97-4abe-892d-21f1be0a7d9a
10:39:04 # time="2020-06-05 15:33:20.498894469Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=53d583af-6c8f-445d-a14d-949d8e1716e4
10:39:04 # time="2020-06-05 15:33:20.499841302Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.503328524Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=53d583af-6c8f-445d-a14d-949d8e1716e4
10:39:04 # time="2020-06-05 15:33:20.517871231Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=460cf795-2e82-49c7-b7ae-fc6e5f045031
10:39:04 # time="2020-06-05 15:33:20.518063338Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.521165146Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=460cf795-2e82-49c7-b7ae-fc6e5f045031
10:39:04 # time="2020-06-05 15:33:20.536030464Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=7d4a1214-2d58-4f91-888c-a5d7fe5fcf53
10:39:04 # time="2020-06-05 15:33:20.536227671Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.537634920Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=7d4a1214-2d58-4f91-888c-a5d7fe5fcf53
10:39:04 # time="2020-06-05 15:33:20.547522165Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/oom,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=cbe89625-038c-46e4-9759-edccbb03949e
10:39:04 # time="2020-06-05 15:33:20.547824876Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/oom:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.548669005Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:259cb9ee8ccba33f36ea25dab0224b602790b3e982788b55fd95bd47b5202684,RepoTags:[quay.io/crio/oom:latest],RepoDigests:[quay.io/crio/oom@sha256:3f540a296d709c376e5f0476ab624b7f300fa2cbe119a5464a2e0e391986eae5],Size_:5973904,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=cbe89625-038c-46e4-9759-edccbb03949e
10:39:04 # time="2020-06-05 15:33:20.564039941Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/oom,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=4e7df3d5-0030-4ad0-a49b-e72fd3642bc8
10:39:04 # time="2020-06-05 15:33:20.564317951Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/oom:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.565339687Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:259cb9ee8ccba33f36ea25dab0224b602790b3e982788b55fd95bd47b5202684,RepoTags:[quay.io/crio/oom:latest],RepoDigests:[quay.io/crio/oom@sha256:3f540a296d709c376e5f0476ab624b7f300fa2cbe119a5464a2e0e391986eae5],Size_:5973904,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=4e7df3d5-0030-4ad0-a49b-e72fd3642bc8
10:39:04 # time="2020-06-05 15:33:20.576650781Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/stderr-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=b577edb2-9218-4bd1-9160-cd1cbcaa65c8
10:39:04 # time="2020-06-05 15:33:20.576882989Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/stderr-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.577768420Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:5501612c200f99317c33b7a02dfbe6e30a76deea821e0f115eb4a6ab7f2ef689,RepoTags:[quay.io/crio/stderr-test:latest],RepoDigests:[quay.io/crio/stderr-test@sha256:d551428befc4a6436e9db96e084e8d4da73bc4568d6db08072f14f40f639c868],Size_:5155772,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b577edb2-9218-4bd1-9160-cd1cbcaa65c8
10:39:04 # time="2020-06-05 15:33:20.593253560Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/stderr-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=5cc8078e-fb53-4101-bb7a-364708723c6f
10:39:04 # time="2020-06-05 15:33:20.593544070Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/stderr-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.594528505Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:5501612c200f99317c33b7a02dfbe6e30a76deea821e0f115eb4a6ab7f2ef689,RepoTags:[quay.io/crio/stderr-test:latest],RepoDigests:[quay.io/crio/stderr-test@sha256:d551428befc4a6436e9db96e084e8d4da73bc4568d6db08072f14f40f639c868],Size_:5155772,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=5cc8078e-fb53-4101-bb7a-364708723c6f
10:39:04 # time="2020-06-05 15:33:20.605178076Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/busybox,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=be815b89-cb15-44a2-8eba-bb6d9bdae3d6
10:39:04 # time="2020-06-05 15:33:20.605455986Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/busybox:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.606426920Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7,RepoTags:[quay.io/crio/busybox:latest],RepoDigests:[quay.io/crio/busybox@sha256:85f389fc5830ba4269d3b4b9a4e8dfd32d5c5b8d9dda0586a9a0468d6961e5d5],Size_:1365270,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=be815b89-cb15-44a2-8eba-bb6d9bdae3d6
10:39:04 # time="2020-06-05 15:33:20.617722614Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/busybox,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=58551ec8-ffc9-414e-9074-1987b4013152
10:39:04 # time="2020-06-05 15:33:20.617919420Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/busybox:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.618461639Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7,RepoTags:[quay.io/crio/busybox:latest],RepoDigests:[quay.io/crio/busybox@sha256:85f389fc5830ba4269d3b4b9a4e8dfd32d5c5b8d9dda0586a9a0468d6961e5d5],Size_:1365270,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=58551ec8-ffc9-414e-9074-1987b4013152
10:39:04 # time="2020-06-05 15:33:20.627884568Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/image-volume-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=1ac63827-8092-4619-986e-6ae113dc079c
10:39:04 # time="2020-06-05 15:33:20.628099675Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/image-volume-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.628765599Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:6aa3df42b4043d37070ae0fe51a1cbf71876c5d95d834d97940fac5e0b3006e1,RepoTags:[quay.io/crio/image-volume-test:latest],RepoDigests:[quay.io/crio/image-volume-test@sha256:98110701e9416f3db7a22cbe3476c76dcd3a2292001654b3014f781097035554],Size_:1299534,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=1ac63827-8092-4619-986e-6ae113dc079c
10:39:04 # time="2020-06-05 15:33:20.641336737Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/image-volume-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=f04006ae-778b-4b25-b050-09579c81e465
10:39:04 # time="2020-06-05 15:33:20.641523244Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]quay.io/crio/image-volume-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.642144765Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:6aa3df42b4043d37070ae0fe51a1cbf71876c5d95d834d97940fac5e0b3006e1,RepoTags:[quay.io/crio/image-volume-test:latest],RepoDigests:[quay.io/crio/image-volume-test@sha256:98110701e9416f3db7a22cbe3476c76dcd3a2292001654b3014f781097035554],Size_:1299534,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=f04006ae-778b-4b25-b050-09579c81e465
10:39:04 # time="2020-06-05 15:33:20.650803767Z" level=debug msg="request: &RunPodSandboxRequest{Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:podsandbox1,Uid:redhat-test-crio,Namespace:redhat.test.crio,Attempt:1,},Hostname:crictl_host,LogDirectory:,DnsConfig:&DNSConfig{Servers:[],Searches:[8.8.8.8],Options:[],},PortMappings:[]*PortMapping{},Labels:map[string]string{group: test,},Annotations:map[string]string{owner: hmeng,security.alpha.kubernetes.io/seccomp/pod: unconfined,},Linux:&LinuxPodSandboxConfig{CgroupParent:/Burstable/pod_123-456,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:&SELinuxOption{User:system_u,Role:system_r,Type:svirt_lxc_net_t,Level:s0:c4,c5,},RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},},RuntimeHandler:,}" file="go-grpc-middleware/chain.go:25" id=91d7c6fe-026b-4fb8-9158-c83cc798a560
10:39:04 # time="2020-06-05 15:33:20.650942772Z" level=info msg="attempting to run pod sandbox with infra container: //POD" file="server/sandbox_run_linux.go:52" id=91d7c6fe-026b-4fb8-9158-c83cc798a560
10:39:04 # time="2020-06-05 15:33:20.651026775Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.IM8zxn26aX/crio+/tmp/tmp.IM8zxn26aX/crio-run]k8s.gcr.io/pause:3.1\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.652068111Z" level=debug msg="exporting opaque data as blob \"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e\"" file="storage/storage_image.go:159"
10:39:04 # time="2020-06-05 15:33:20.810069122Z" level=debug msg="created pod sandbox \"a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7\"" file="storage/runtime.go:281"
10:39:04 # time="2020-06-05 15:33:21.041137182Z" level=debug msg="pod sandbox \"a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7\" has work directory \"/tmp/tmp.IM8zxn26aX/crio/overlay-containers/a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7/userdata\"" file="storage/runtime.go:321"
10:39:04 # time="2020-06-05 15:33:21.041305988Z" level=debug msg="pod sandbox \"a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7\" has run directory \"/tmp/tmp.IM8zxn26aX/crio-run/overlay-containers/a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7/userdata\"" file="storage/runtime.go:331"
10:39:04 # time="2020-06-05 15:33:21.106262053Z" level=debug msg="overlay: mount_data=lowerdir=/tmp/tmp.IM8zxn26aX/crio/overlay/l/4OWBHNKKCZHPOTFPQ6NKSK5MNS,upperdir=/tmp/tmp.IM8zxn26aX/crio/overlay/8e30fd9a24b836aabc2e5f1da777def77aa92567f604da88b98982bcc905cbe7/diff,workdir=/tmp/tmp.IM8zxn26aX/crio/overlay/8e30fd9a24b836aabc2e5f1da777def77aa92567f604da88b98982bcc905cbe7/work" file="overlay/overlay.go:1002"
10:39:04 # time="2020-06-05 15:33:21.246680051Z" level=debug msg="mounted container \"a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7\" at \"/tmp/tmp.IM8zxn26aX/crio/overlay/8e30fd9a24b836aabc2e5f1da777def77aa92567f604da88b98982bcc905cbe7/merged\"" file="storage/runtime.go:426"
10:39:04 # time="2020-06-05 15:33:21.248945130Z" level=debug msg="running conmon: /usr/local/bin/conmon" args="[--syslog -c a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7 -n k8s_POD_podsandbox1_redhat.test.crio_redhat-test-crio_1 -u a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7 -r /usr/local/bin/kata-runtime -b /tmp/tmp.IM8zxn26aX/crio-run/overlay-containers/a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7/userdata --persist-dir /tmp/tmp.IM8zxn26aX/crio/overlay-containers/a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7/userdata -p /tmp/tmp.IM8zxn26aX/crio-run/overlay-containers/a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7/userdata/pidfile -P /tmp/tmp.IM8zxn26aX/crio-run/overlay-containers/a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7/userdata/conmon-pidfile -l /var/log/crio/pods/a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7/a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7.log --exit-dir /tmp/tmp.IM8zxn26aX/containers/exits --socket-dir-path /tmp/tmp.IM8zxn26aX/containers --log-level debug --runtime-arg --root=/run/runc]" file="oci/runtime_oci.go:128"
10:39:04 # time="2020-06-05 15:33:21.249383845Z" level=debug msg="Running conmon under custom slice system.slice and unitName crio-conmon-a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7.scope" file="oci/oci_linux.go:66"
10:39:04 # time="2020-06-05 15:33:21.314782626Z" level=debug msg="Received container pid: -1" file="oci/runtime_oci.go:207"
10:39:04 # time="2020-06-05 15:33:21.314957932Z" level=error msg="Container creation error: file /usr/libexec/kata-containers/kata-proxy does not exist\n" file="oci/runtime_oci.go:210"
10:39:04 # time="2020-06-05 15:33:21.364409457Z" level=warning msg="unable to delete container a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7: `/usr/local/bin/kata-runtime --root /run/runc delete --force a87dacbb49511ec6256dc8379d0231ba0d840bd38bbc277e67a670c0c02f90b7` failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n (exit status 1)" file="oci/runtime_oci.go:182"
10:39:04 # time="2020-06-05 15:33:28.102952688Z" level=debug msg="response error: container create failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n" file="go-grpc-middleware/chain.go:25" id=91d7c6fe-026b-4fb8-9158-c83cc798a560
10:39:04 # time="2020-06-05T15:33:28Z" level=fatal msg="run pod sandbox failed: rpc error: code = Unknown desc = container create failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n"
10:39:04 # time="2020-06-05 15:33:28.117655701Z" level=debug msg="request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1a9ace87-a455-40fa-a67d-42db338e140c
10:39:04 # time="2020-06-05 15:33:28.117754604Z" level=debug msg="no filters were applied, returning full container list" file="server/container_list.go:59" id=1a9ace87-a455-40fa-a67d-42db338e140c
10:39:04 # time="2020-06-05 15:33:28.117820807Z" level=debug msg="response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=1a9ace87-a455-40fa-a67d-42db338e140c
10:39:04 # time="2020-06-05 15:33:28.130639654Z" level=debug msg="request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d74e9677-c972-4901-be71-24d0409c44ef
10:39:04 # time="2020-06-05 15:33:28.130722557Z" level=debug msg="response: &ListPodSandboxResponse{Items:[]*PodSandbox{},}" file="go-grpc-middleware/chain.go:25" id=d74e9677-c972-4901-be71-24d0409c44ef
10:39:04 # time="2020-06-05 15:33:28.135510924Z" level=debug msg="received signal" file="crio/main.go:48" signal=terminated
10:39:04 # time="2020-06-05 15:33:28.135571026Z" level=debug msg="Caught SIGTERM" file="crio/main.go:58"
10:39:04 # time="2020-06-05 15:33:28.135915138Z" level=debug msg="hook monitoring canceled: context canceled" file="hooks/monitor.go:60"
10:39:04 # time="2020-06-05 15:33:28.136022841Z" level=debug msg="closing exit monitor..." file="server/server.go:601"
10:39:04 # time="2020-06-05 15:33:28.136534759Z" level=debug msg="closed http server" file="crio/main.go:278"
10:39:04 # time="2020-06-05 15:33:28.233033125Z" level=debug msg="closed stream server" file="crio/main.go:308"
10:39:04 # time="2020-06-05 15:33:28.233162029Z" level=debug msg="closed monitors" file="crio/main.go:310"
10:39:04 # time="2020-06-05 15:33:28.233118828Z" level=debug msg="[graphdriver] trying provided driver \"overlay\"" file="drivers/driver.go:244"
10:39:04 # time="2020-06-05 15:33:28.233379037Z" level=debug msg="cached value indicated that overlay is supported" file="overlay/overlay.go:173"
10:39:04 # time="2020-06-05 15:33:28.233542743Z" level=debug msg="cached value indicated that metacopy is not being used" file="overlay/overlay.go:207"
10:39:04 # time="2020-06-05 15:33:28.233237132Z" level=debug msg="closed hook monitor" file="crio/main.go:313"
10:39:04 # time="2020-06-05 15:33:28.233969658Z" level=debug msg="closed main server" file="crio/main.go:318"
10:39:04 # 0
10:39:04 not ok 3 ctr termination reason Error
10:39:04 # (in test file ctr.bats, line 49)
10:39:04 # `[ "$status" -eq 0 ]' failed
10:39:04 # 0
10:39:04 # time="2020-06-05 15:33:19.365201727Z" level=debug msg="found valid runtime \"runc\" for runtime_path \"/usr/local/bin/kata-runtime\"" file="config/config.go:891"
10:39:04 # time="2020-06-05 15:33:19.365321332Z" level=debug msg="using hooks directory: /tmp/tmp.zdXSlNyufM/hooks" file="config/config.go:760"
10:39:04 # time="2020-06-05 15:33:19.365365933Z" level=info msg="using conmon executable: /usr/local/bin/conmon" file="config/config.go:823"
10:39:04 # time="2020-06-05 15:33:19.365434836Z" level=info msg="using pinns executable: /tmp/jenkins/workspace/kata-containers-2.0-tests-debian-PR/go/src/github.com/cri-o/cri-o/bin/pinns" file="config/config.go:823"
10:39:04 # time="2020-06-05 15:33:19.365801348Z" level=debug msg="cached value indicated that overlay is supported" file="overlay/overlay.go:173"
10:39:04 # time="2020-06-05 15:33:19.365916252Z" level=debug msg="cached value indicated that metacopy is not being used" file="overlay/overlay.go:207"
10:39:04 # time="2020-06-05 15:33:19.370829224Z" level=debug msg="cached value indicated that native-diff is usable" file="overlay/overlay.go:437"
10:39:04 # time="2020-06-05 15:33:19.370916527Z" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" file="overlay/overlay.go:265"
10:39:04 # time="2020-06-05 15:33:19.370973629Z" level=info msg="[graphdriver] using prior storage driver: overlay" file="drivers/driver.go:279"
10:39:04 # time="2020-06-05 15:33:19.372680288Z" level=debug msg="reading hooks from /tmp/tmp.zdXSlNyufM/hooks" file="hooks/read.go:65"
10:39:04 # time="2020-06-05 15:33:19.375305580Z" level=info msg="Found CNI network crionet (type=bridge) at /tmp/tmp.zdXSlNyufM/cni/net.d/10-crio.conf" file="ocicni/ocicni.go:321"
10:39:04 # time="2020-06-05 15:33:19.375391883Z" level=info msg="Update default CNI network name to crionet" file="ocicni/ocicni.go:375"
10:39:04 # time="2020-06-05 15:33:19.379698133Z" level=info msg="no seccomp profile specified, using the internal default" file="server/server.go:358"
10:39:04 # time="2020-06-05 15:33:19.379849338Z" level=info msg="installing default apparmor profile: crio-default-1.17.0-dev" file="server/server.go:365"
10:39:04 # time="2020-06-05 15:33:19.431621244Z" level=debug msg="Golang's threads limit set to 115200" file="server/server.go:267"
10:39:04 # time="2020-06-05 15:33:19.431992257Z" level=debug msg="sandboxes: []" file="server/server.go:450"
10:39:04 # time="2020-06-05 15:33:19.432277967Z" level=debug msg="registered SIGHUP watcher for file \"/tmp/tmp.zdXSlNyufM/crio.conf\"" file="server/server.go:645"
10:39:04 # time="2020-06-05 15:33:19.432624279Z" level=debug msg="monitoring \"/tmp/tmp.zdXSlNyufM/hooks\" for hooks" file="hooks/monitor.go:43"
10:39:04 # time="2020-06-05 15:33:20.303601957Z" level=debug msg="request: &StatusRequest{Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=b46415f9-3a2f-44b4-aa22-eb41bc18019b
10:39:04 # time="2020-06-05 15:33:20.303765863Z" level=debug msg="response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b46415f9-3a2f-44b4-aa22-eb41bc18019b
10:39:04 # time="2020-06-05 15:33:20.314493337Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=b9f74290-7d22-4cec-b2f5-2dbe2ce4242c
10:39:04 # time="2020-06-05 15:33:20.314988955Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.317396039Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b9f74290-7d22-4cec-b2f5-2dbe2ce4242c
10:39:04 # time="2020-06-05 15:33:20.328941841Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=b71061d2-196e-44bd-8465-c66301750045
10:39:04 # time="2020-06-05 15:33:20.329150249Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.330488695Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b71061d2-196e-44bd-8465-c66301750045
10:39:04 # time="2020-06-05 15:33:20.342310608Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=5fee3ffb-37e5-4f68-85c1-27dec531c0d3
10:39:04 # time="2020-06-05 15:33:20.342518415Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.343861062Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=5fee3ffb-37e5-4f68-85c1-27dec531c0d3
10:39:04 # time="2020-06-05 15:33:20.353370793Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/oom,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=6238dd2f-f481-4cb9-9351-63873d76f12d
10:39:04 # time="2020-06-05 15:33:20.353726706Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/oom:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.355194057Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:259cb9ee8ccba33f36ea25dab0224b602790b3e982788b55fd95bd47b5202684,RepoTags:[quay.io/crio/oom:latest],RepoDigests:[quay.io/crio/oom@sha256:3f540a296d709c376e5f0476ab624b7f300fa2cbe119a5464a2e0e391986eae5],Size_:5973904,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=6238dd2f-f481-4cb9-9351-63873d76f12d
10:39:04 # time="2020-06-05 15:33:20.365944132Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/oom,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=359cf4b1-ccdc-4851-acc6-b58babba7249
10:39:04 # time="2020-06-05 15:33:20.366145839Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/oom:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.367547688Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:259cb9ee8ccba33f36ea25dab0224b602790b3e982788b55fd95bd47b5202684,RepoTags:[quay.io/crio/oom:latest],RepoDigests:[quay.io/crio/oom@sha256:3f540a296d709c376e5f0476ab624b7f300fa2cbe119a5464a2e0e391986eae5],Size_:5973904,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=359cf4b1-ccdc-4851-acc6-b58babba7249
10:39:04 # time="2020-06-05 15:33:20.376765409Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/stderr-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=1da89499-cec1-47b4-a1e9-b60840a64f71
10:39:04 # time="2020-06-05 15:33:20.376979617Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/stderr-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.377707942Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:5501612c200f99317c33b7a02dfbe6e30a76deea821e0f115eb4a6ab7f2ef689,RepoTags:[quay.io/crio/stderr-test:latest],RepoDigests:[quay.io/crio/stderr-test@sha256:d551428befc4a6436e9db96e084e8d4da73bc4568d6db08072f14f40f639c868],Size_:5155772,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=1da89499-cec1-47b4-a1e9-b60840a64f71
10:39:04 # time="2020-06-05 15:33:20.388304812Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/stderr-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=d2101168-154c-4e7c-b21b-b267eae32b97
10:39:04 # time="2020-06-05 15:33:20.388518719Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/stderr-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.389221644Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:5501612c200f99317c33b7a02dfbe6e30a76deea821e0f115eb4a6ab7f2ef689,RepoTags:[quay.io/crio/stderr-test:latest],RepoDigests:[quay.io/crio/stderr-test@sha256:d551428befc4a6436e9db96e084e8d4da73bc4568d6db08072f14f40f639c868],Size_:5155772,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=d2101168-154c-4e7c-b21b-b267eae32b97
10:39:04 # time="2020-06-05 15:33:20.397235023Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/busybox,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=9fa3a1cd-20f8-4752-b1c1-510a3b2597d7
10:39:04 # time="2020-06-05 15:33:20.397408629Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/busybox:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.398116554Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7,RepoTags:[quay.io/crio/busybox:latest],RepoDigests:[quay.io/crio/busybox@sha256:85f389fc5830ba4269d3b4b9a4e8dfd32d5c5b8d9dda0586a9a0468d6961e5d5],Size_:1365270,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=9fa3a1cd-20f8-4752-b1c1-510a3b2597d7
10:39:04 # time="2020-06-05 15:33:20.408182005Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/busybox,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=059ea7e2-cd6b-45b4-8572-9cf10e4cfee6
10:39:04 # time="2020-06-05 15:33:20.408372512Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/busybox:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.408961732Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7,RepoTags:[quay.io/crio/busybox:latest],RepoDigests:[quay.io/crio/busybox@sha256:85f389fc5830ba4269d3b4b9a4e8dfd32d5c5b8d9dda0586a9a0468d6961e5d5],Size_:1365270,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=059ea7e2-cd6b-45b4-8572-9cf10e4cfee6
10:39:04 # time="2020-06-05 15:33:20.417016013Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/image-volume-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=e0084e9c-dcda-4e0c-bfc1-937edb658e93
10:39:04 # time="2020-06-05 15:33:20.417248221Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/image-volume-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.417985447Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:6aa3df42b4043d37070ae0fe51a1cbf71876c5d95d834d97940fac5e0b3006e1,RepoTags:[quay.io/crio/image-volume-test:latest],RepoDigests:[quay.io/crio/image-volume-test@sha256:98110701e9416f3db7a22cbe3476c76dcd3a2292001654b3014f781097035554],Size_:1299534,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e0084e9c-dcda-4e0c-bfc1-937edb658e93
10:39:04 # time="2020-06-05 15:33:20.429007831Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/image-volume-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=57f22993-97e4-4214-a7d7-5a32690af78b
10:39:04 # time="2020-06-05 15:33:20.429237139Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]quay.io/crio/image-volume-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.429989066Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:6aa3df42b4043d37070ae0fe51a1cbf71876c5d95d834d97940fac5e0b3006e1,RepoTags:[quay.io/crio/image-volume-test:latest],RepoDigests:[quay.io/crio/image-volume-test@sha256:98110701e9416f3db7a22cbe3476c76dcd3a2292001654b3014f781097035554],Size_:1299534,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=57f22993-97e4-4214-a7d7-5a32690af78b
10:39:04 # time="2020-06-05 15:33:20.441384063Z" level=debug msg="request: &RunPodSandboxRequest{Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:podsandbox1,Uid:redhat-test-crio,Namespace:redhat.test.crio,Attempt:1,},Hostname:crictl_host,LogDirectory:,DnsConfig:&DNSConfig{Servers:[],Searches:[8.8.8.8],Options:[],},PortMappings:[]*PortMapping{},Labels:map[string]string{group: test,},Annotations:map[string]string{owner: hmeng,security.alpha.kubernetes.io/seccomp/pod: unconfined,},Linux:&LinuxPodSandboxConfig{CgroupParent:/Burstable/pod_123-456,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:&SELinuxOption{User:system_u,Role:system_r,Type:svirt_lxc_net_t,Level:s0:c4,c5,},RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},},RuntimeHandler:,}" file="go-grpc-middleware/chain.go:25" id=9db6253c-e92a-4b2e-9709-b4510539034e
10:39:04 # time="2020-06-05 15:33:20.441529768Z" level=info msg="attempting to run pod sandbox with infra container: //POD" file="server/sandbox_run_linux.go:52" id=9db6253c-e92a-4b2e-9709-b4510539034e
10:39:04 # time="2020-06-05 15:33:20.441606971Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.zdXSlNyufM/crio+/tmp/tmp.zdXSlNyufM/crio-run]k8s.gcr.io/pause:3.1\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:33:20.442182491Z" level=debug msg="exporting opaque data as blob \"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e\"" file="storage/storage_image.go:159"
10:39:04 # time="2020-06-05 15:33:20.585481089Z" level=debug msg="created pod sandbox \"3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1\"" file="storage/runtime.go:281"
10:39:04 # time="2020-06-05 15:33:20.713601658Z" level=debug msg="pod sandbox \"3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1\" has work directory \"/tmp/tmp.zdXSlNyufM/crio/overlay-containers/3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1/userdata\"" file="storage/runtime.go:321"
10:39:04 # time="2020-06-05 15:33:20.713914769Z" level=debug msg="pod sandbox \"3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1\" has run directory \"/tmp/tmp.zdXSlNyufM/crio-run/overlay-containers/3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1/userdata\"" file="storage/runtime.go:331"
10:39:04 # time="2020-06-05 15:33:20.782328055Z" level=debug msg="overlay: mount_data=lowerdir=/tmp/tmp.zdXSlNyufM/crio/overlay/l/6UUASIWIKYRRMOTHM2FEBJOYAD,upperdir=/tmp/tmp.zdXSlNyufM/crio/overlay/b155a07ef8fb0b34a7144a590b1cd528096893826e21929053430b342958d680/diff,workdir=/tmp/tmp.zdXSlNyufM/crio/overlay/b155a07ef8fb0b34a7144a590b1cd528096893826e21929053430b342958d680/work" file="overlay/overlay.go:1002"
10:39:04 # time="2020-06-05 15:33:20.829221890Z" level=debug msg="mounted container \"3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1\" at \"/tmp/tmp.zdXSlNyufM/crio/overlay/b155a07ef8fb0b34a7144a590b1cd528096893826e21929053430b342958d680/merged\"" file="storage/runtime.go:426"
10:39:04 # time="2020-06-05 15:33:20.830911249Z" level=debug msg="running conmon: /usr/local/bin/conmon" args="[--syslog -c 3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1 -n k8s_POD_podsandbox1_redhat.test.crio_redhat-test-crio_1 -u 3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1 -r /usr/local/bin/kata-runtime -b /tmp/tmp.zdXSlNyufM/crio-run/overlay-containers/3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1/userdata --persist-dir /tmp/tmp.zdXSlNyufM/crio/overlay-containers/3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1/userdata -p /tmp/tmp.zdXSlNyufM/crio-run/overlay-containers/3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1/userdata/pidfile -P /tmp/tmp.zdXSlNyufM/crio-run/overlay-containers/3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1/userdata/conmon-pidfile -l /var/log/crio/pods/3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1/3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1.log --exit-dir /tmp/tmp.zdXSlNyufM/containers/exits --socket-dir-path /tmp/tmp.zdXSlNyufM/containers --log-level debug --runtime-arg --root=/run/runc]" file="oci/runtime_oci.go:128"
10:39:04 # time="2020-06-05 15:33:20.831635475Z" level=debug msg="Running conmon under custom slice system.slice and unitName crio-conmon-3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1.scope" file="oci/oci_linux.go:66"
10:39:04 # time="2020-06-05 15:33:21.218343762Z" level=debug msg="Received container pid: -1" file="oci/runtime_oci.go:207"
10:39:04 # time="2020-06-05 15:33:21.218464267Z" level=error msg="Container creation error: file /usr/libexec/kata-containers/kata-proxy does not exist\n" file="oci/runtime_oci.go:210"
10:39:04 # time="2020-06-05 15:33:21.276662396Z" level=warning msg="unable to delete container 3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1: `/usr/local/bin/kata-runtime --root /run/runc delete --force 3cd866716456a9c5907dc432e096d8f2d9d1fb69b270484385069807b17f48d1` failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n (exit status 1)" file="oci/runtime_oci.go:182"
10:39:04 # time="2020-06-05 15:33:28.102302165Z" level=debug msg="response error: container create failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n" file="go-grpc-middleware/chain.go:25" id=9db6253c-e92a-4b2e-9709-b4510539034e
10:39:04 # time="2020-06-05T15:33:28Z" level=fatal msg="run pod sandbox failed: rpc error: code = Unknown desc = container create failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n"
10:39:04 # time="2020-06-05 15:33:28.118973747Z" level=debug msg="request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f557c56a-fe5f-4511-8208-ed09059b6340
10:39:04 # time="2020-06-05 15:33:28.119048449Z" level=debug msg="no filters were applied, returning full container list" file="server/container_list.go:59" id=f557c56a-fe5f-4511-8208-ed09059b6340
10:39:04 # time="2020-06-05 15:33:28.119098251Z" level=debug msg="response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=f557c56a-fe5f-4511-8208-ed09059b6340
10:39:04 # time="2020-06-05 15:33:28.135192712Z" level=debug msg="request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ba64ab6b-3444-4aed-b2e9-8f82344e7200
10:39:04 # time="2020-06-05 15:33:28.135277415Z" level=debug msg="response: &ListPodSandboxResponse{Items:[]*PodSandbox{},}" file="go-grpc-middleware/chain.go:25" id=ba64ab6b-3444-4aed-b2e9-8f82344e7200
10:39:04 # time="2020-06-05 15:33:28.139145950Z" level=debug msg="received signal" file="crio/main.go:48" signal=terminated
10:39:04 # time="2020-06-05 15:33:28.139218753Z" level=debug msg="Caught SIGTERM" file="crio/main.go:58"
10:39:04 # time="2020-06-05 15:33:28.139405459Z" level=debug msg="hook monitoring canceled: context canceled" file="hooks/monitor.go:60"
10:39:04 # time="2020-06-05 15:33:28.139415460Z" level=debug msg="closed http server" file="crio/main.go:278"
10:39:04 # time="2020-06-05 15:33:28.139449961Z" level=debug msg="closing exit monitor..." file="server/server.go:601"
10:39:04 # time="2020-06-05 15:33:28.233033525Z" level=debug msg="closed stream server" file="crio/main.go:308"
10:39:04 # time="2020-06-05 15:33:28.233174730Z" level=debug msg="closed monitors" file="crio/main.go:310"
10:39:04 # time="2020-06-05 15:33:28.233244132Z" level=debug msg="closed hook monitor" file="crio/main.go:313"
10:39:04 # time="2020-06-05 15:33:28.233313035Z" level=debug msg="closed main server" file="crio/main.go:318"
10:39:04 # 0
10:39:04 not ok 4 ulimits
10:39:04 # (in test file ctr.bats, line 72)
10:39:04 # `[ "$status" -eq 0 ]' failed
10:39:04 # 0
10:39:04 # time="2020-06-05 15:38:54.479610908Z" level=debug msg="found valid runtime \"runc\" for runtime_path \"/usr/local/bin/kata-runtime\"" file="config/config.go:891"
10:39:04 # time="2020-06-05 15:38:54.479739313Z" level=debug msg="using hooks directory: /tmp/tmp.pWaJYEurJA/hooks" file="config/config.go:760"
10:39:04 # time="2020-06-05 15:38:54.479807115Z" level=info msg="using conmon executable: /usr/local/bin/conmon" file="config/config.go:823"
10:39:04 # time="2020-06-05 15:38:54.479859617Z" level=info msg="using pinns executable: /tmp/jenkins/workspace/kata-containers-2.0-tests-debian-PR/go/src/github.com/cri-o/cri-o/bin/pinns" file="config/config.go:823"
10:39:04 # time="2020-06-05 15:38:54.480118026Z" level=debug msg="cached value indicated that overlay is supported" file="overlay/overlay.go:173"
10:39:04 # time="2020-06-05 15:38:54.480199329Z" level=debug msg="cached value indicated that metacopy is not being used" file="overlay/overlay.go:207"
10:39:04 # time="2020-06-05 15:38:54.481655279Z" level=debug msg="cached value indicated that native-diff is usable" file="overlay/overlay.go:437"
10:39:04 # time="2020-06-05 15:38:54.481732982Z" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" file="overlay/overlay.go:265"
10:39:04 # time="2020-06-05 15:38:54.481796384Z" level=info msg="[graphdriver] using prior storage driver: overlay" file="drivers/driver.go:279"
10:39:04 # time="2020-06-05 15:38:54.482407505Z" level=debug msg="reading hooks from /tmp/tmp.pWaJYEurJA/hooks" file="hooks/read.go:65"
10:39:04 # time="2020-06-05 15:38:54.487998699Z" level=info msg="Found CNI network crionet (type=bridge) at /tmp/tmp.pWaJYEurJA/cni/net.d/10-crio.conf" file="ocicni/ocicni.go:321"
10:39:04 # time="2020-06-05 15:38:54.488073301Z" level=info msg="Update default CNI network name to crionet" file="ocicni/ocicni.go:375"
10:39:04 # time="2020-06-05 15:38:54.490966702Z" level=info msg="no seccomp profile specified, using the internal default" file="server/server.go:358"
10:39:04 # time="2020-06-05 15:38:54.491051104Z" level=info msg="installing default apparmor profile: crio-default-1.17.0-dev" file="server/server.go:365"
10:39:04 # time="2020-06-05 15:38:54.517073005Z" level=debug msg="Golang's threads limit set to 115200" file="server/server.go:267"
10:39:04 # time="2020-06-05 15:38:54.517378116Z" level=debug msg="sandboxes: []" file="server/server.go:450"
10:39:04 # time="2020-06-05 15:38:54.517556322Z" level=debug msg="registered SIGHUP watcher for file \"/tmp/tmp.pWaJYEurJA/crio.conf\"" file="server/server.go:645"
10:39:04 # time="2020-06-05 15:38:54.517844832Z" level=debug msg="monitoring \"/tmp/tmp.pWaJYEurJA/hooks\" for hooks" file="hooks/monitor.go:43"
10:39:04 # time="2020-06-05 15:38:55.417797791Z" level=debug msg="request: &StatusRequest{Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=e6d52c72-d0e5-48b4-9c65-4ca7b22162c9
10:39:04 # time="2020-06-05 15:38:55.417898495Z" level=debug msg="response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e6d52c72-d0e5-48b4-9c65-4ca7b22162c9
10:39:04 # time="2020-06-05 15:38:55.428436160Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=bf14aa93-6ce0-46a7-8b19-5463547580b1
10:39:04 # time="2020-06-05 15:38:55.428906976Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.431283458Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=bf14aa93-6ce0-46a7-8b19-5463547580b1
10:39:04 # time="2020-06-05 15:38:55.442231837Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=b1030954-81b7-4ff8-b432-54dfee2b1e8c
10:39:04 # time="2020-06-05 15:38:55.442840759Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.445080536Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b1030954-81b7-4ff8-b432-54dfee2b1e8c
10:39:04 # time="2020-06-05 15:38:55.454569065Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/redis:alpine,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=3d9c04fa-e25a-4eaa-8264-1266e916217b
10:39:04 # time="2020-06-05 15:38:55.454727770Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/redis:alpine\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.455962413Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:98bd7cfc43b8ef0ff130465e3d5427c0771002c2f35a6a9b62cb2d04602bed0a,RepoTags:[quay.io/crio/redis:alpine],RepoDigests:[quay.io/crio/redis@sha256:1780b5a5496189974b94eb2595d86731d7a0820e4beb8ea770974298a943ed55],Size_:28138628,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=3d9c04fa-e25a-4eaa-8264-1266e916217b
10:39:04 # time="2020-06-05 15:38:55.464347603Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/oom,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=465af0e8-e619-45a9-b2c1-d96201e8889d
10:39:04 # time="2020-06-05 15:38:55.464563011Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/oom:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.465309236Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:259cb9ee8ccba33f36ea25dab0224b602790b3e982788b55fd95bd47b5202684,RepoTags:[quay.io/crio/oom:latest],RepoDigests:[quay.io/crio/oom@sha256:3f540a296d709c376e5f0476ab624b7f300fa2cbe119a5464a2e0e391986eae5],Size_:5973904,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=465af0e8-e619-45a9-b2c1-d96201e8889d
10:39:04 # time="2020-06-05 15:38:55.476099810Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/oom,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=f8510281-e6d7-4599-b068-d52c737a0ccc
10:39:04 # time="2020-06-05 15:38:55.476293917Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/oom:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.477252050Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:259cb9ee8ccba33f36ea25dab0224b602790b3e982788b55fd95bd47b5202684,RepoTags:[quay.io/crio/oom:latest],RepoDigests:[quay.io/crio/oom@sha256:3f540a296d709c376e5f0476ab624b7f300fa2cbe119a5464a2e0e391986eae5],Size_:5973904,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=f8510281-e6d7-4599-b068-d52c737a0ccc
10:39:04 # time="2020-06-05 15:38:55.484545703Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/stderr-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=31e9a7a0-00cf-4ced-b58d-c4f1fbbb9382
10:39:04 # time="2020-06-05 15:38:55.484728509Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/stderr-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.485334730Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:5501612c200f99317c33b7a02dfbe6e30a76deea821e0f115eb4a6ab7f2ef689,RepoTags:[quay.io/crio/stderr-test:latest],RepoDigests:[quay.io/crio/stderr-test@sha256:d551428befc4a6436e9db96e084e8d4da73bc4568d6db08072f14f40f639c868],Size_:5155772,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=31e9a7a0-00cf-4ced-b58d-c4f1fbbb9382
10:39:04 # time="2020-06-05 15:38:55.495183071Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/stderr-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=c3734318-73f6-4f5b-baa9-a1dc293dcec9
10:39:04 # time="2020-06-05 15:38:55.495392878Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/stderr-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.496174405Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:5501612c200f99317c33b7a02dfbe6e30a76deea821e0f115eb4a6ab7f2ef689,RepoTags:[quay.io/crio/stderr-test:latest],RepoDigests:[quay.io/crio/stderr-test@sha256:d551428befc4a6436e9db96e084e8d4da73bc4568d6db08072f14f40f639c868],Size_:5155772,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=c3734318-73f6-4f5b-baa9-a1dc293dcec9
10:39:04 # time="2020-06-05 15:38:55.503453057Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/busybox,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=df126c1b-ffed-4c09-be11-550ec89c490d
10:39:04 # time="2020-06-05 15:38:55.503712866Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/busybox:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.504313187Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7,RepoTags:[quay.io/crio/busybox:latest],RepoDigests:[quay.io/crio/busybox@sha256:85f389fc5830ba4269d3b4b9a4e8dfd32d5c5b8d9dda0586a9a0468d6961e5d5],Size_:1365270,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=df126c1b-ffed-4c09-be11-550ec89c490d
10:39:04 # time="2020-06-05 15:38:55.513156393Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/busybox,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=e8021adc-c6a8-4899-98cb-90ace81cdfac
10:39:04 # time="2020-06-05 15:38:55.513325999Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/busybox:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.513839017Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7,RepoTags:[quay.io/crio/busybox:latest],RepoDigests:[quay.io/crio/busybox@sha256:85f389fc5830ba4269d3b4b9a4e8dfd32d5c5b8d9dda0586a9a0468d6961e5d5],Size_:1365270,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e8021adc-c6a8-4899-98cb-90ace81cdfac
10:39:04 # time="2020-06-05 15:38:55.520611451Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/image-volume-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=4f160774-4af6-4ea8-a060-aeb0159257c3
10:39:04 # time="2020-06-05 15:38:55.520798158Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/image-volume-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.521356877Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:6aa3df42b4043d37070ae0fe51a1cbf71876c5d95d834d97940fac5e0b3006e1,RepoTags:[quay.io/crio/image-volume-test:latest],RepoDigests:[quay.io/crio/image-volume-test@sha256:98110701e9416f3db7a22cbe3476c76dcd3a2292001654b3014f781097035554],Size_:1299534,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=4f160774-4af6-4ea8-a060-aeb0159257c3
10:39:04 # time="2020-06-05 15:38:55.533007280Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:quay.io/crio/image-volume-test,},Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=76a31aae-dcc8-4a27-b11b-2b1a9dbdc464
10:39:04 # time="2020-06-05 15:38:55.533218488Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]quay.io/crio/image-volume-test:latest\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.533819609Z" level=debug msg="response: &ImageStatusResponse{Image:&Image{Id:6aa3df42b4043d37070ae0fe51a1cbf71876c5d95d834d97940fac5e0b3006e1,RepoTags:[quay.io/crio/image-volume-test:latest],RepoDigests:[quay.io/crio/image-volume-test@sha256:98110701e9416f3db7a22cbe3476c76dcd3a2292001654b3014f781097035554],Size_:1299534,Uid:nil,Username:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=76a31aae-dcc8-4a27-b11b-2b1a9dbdc464
10:39:04 # time="2020-06-05 15:38:55.542336403Z" level=debug msg="request: &RunPodSandboxRequest{Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:podsandbox1,Uid:redhat-test-crio,Namespace:redhat.test.crio,Attempt:1,},Hostname:crictl_host,LogDirectory:,DnsConfig:&DNSConfig{Servers:[],Searches:[8.8.8.8],Options:[],},PortMappings:[]*PortMapping{},Labels:map[string]string{group: test,},Annotations:map[string]string{owner: hmeng,security.alpha.kubernetes.io/seccomp/pod: unconfined,},Linux:&LinuxPodSandboxConfig{CgroupParent:/Burstable/pod_123-456,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:&SELinuxOption{User:system_u,Role:system_r,Type:svirt_lxc_net_t,Level:s0:c4,c5,},RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},},RuntimeHandler:,}" file="go-grpc-middleware/chain.go:25" id=ce7a0f8a-cf4a-40d2-b4a3-a1cca3a2540a
10:39:04 # time="2020-06-05 15:38:55.542434107Z" level=info msg="attempting to run pod sandbox with infra container: //POD" file="server/sandbox_run_linux.go:52" id=ce7a0f8a-cf4a-40d2-b4a3-a1cca3a2540a
10:39:04 # time="2020-06-05 15:38:55.542524310Z" level=debug msg="parsed reference into \"[overlay@/tmp/tmp.pWaJYEurJA/crio+/tmp/tmp.pWaJYEurJA/crio-run]k8s.gcr.io/pause:3.1\"" file="storage/storage_transport.go:174"
10:39:04 # time="2020-06-05 15:38:55.542945425Z" level=debug msg="exporting opaque data as blob \"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e\"" file="storage/storage_image.go:159"
10:39:04 # time="2020-06-05 15:38:55.743697375Z" level=debug msg="created pod sandbox \"b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d\"" file="storage/runtime.go:281"
10:39:04 # time="2020-06-05 15:38:57.659599010Z" level=debug msg="pod sandbox \"b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d\" has work directory \"/tmp/tmp.pWaJYEurJA/crio/overlay-containers/b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d/userdata\"" file="storage/runtime.go:321"
10:39:04 # time="2020-06-05 15:38:57.659844018Z" level=debug msg="pod sandbox \"b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d\" has run directory \"/tmp/tmp.pWaJYEurJA/crio-run/overlay-containers/b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d/userdata\"" file="storage/runtime.go:331"
10:39:04 # time="2020-06-05 15:38:58.338787326Z" level=debug msg="overlay: mount_data=lowerdir=/tmp/tmp.pWaJYEurJA/crio/overlay/l/YA4RO3PA2YU4V3JP6VO7GPDMDE,upperdir=/tmp/tmp.pWaJYEurJA/crio/overlay/11b6ad4757725a0ffec4ac3101789c8bb9ca13842d26a591c3d50d3273a02be0/diff,workdir=/tmp/tmp.pWaJYEurJA/crio/overlay/11b6ad4757725a0ffec4ac3101789c8bb9ca13842d26a591c3d50d3273a02be0/work" file="overlay/overlay.go:1002"
10:39:04 # time="2020-06-05 15:38:58.469341046Z" level=debug msg="mounted container \"b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d\" at \"/tmp/tmp.pWaJYEurJA/crio/overlay/11b6ad4757725a0ffec4ac3101789c8bb9ca13842d26a591c3d50d3273a02be0/merged\"" file="storage/runtime.go:426"
10:39:04 # time="2020-06-05 15:38:58.470870699Z" level=debug msg="running conmon: /usr/local/bin/conmon" args="[--syslog -c b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d -n k8s_POD_podsandbox1_redhat.test.crio_redhat-test-crio_1 -u b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d -r /usr/local/bin/kata-runtime -b /tmp/tmp.pWaJYEurJA/crio-run/overlay-containers/b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d/userdata --persist-dir /tmp/tmp.pWaJYEurJA/crio/overlay-containers/b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d/userdata -p /tmp/tmp.pWaJYEurJA/crio-run/overlay-containers/b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d/userdata/pidfile -P /tmp/tmp.pWaJYEurJA/crio-run/overlay-containers/b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d/userdata/conmon-pidfile -l /var/log/crio/pods/b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d/b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d.log --exit-dir /tmp/tmp.pWaJYEurJA/containers/exits --socket-dir-path /tmp/tmp.pWaJYEurJA/containers --log-level debug --runtime-arg --root=/run/runc]" file="oci/runtime_oci.go:128"
10:39:04 # time="2020-06-05 15:38:58.471318014Z" level=debug msg="Running conmon under custom slice system.slice and unitName crio-conmon-b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d.scope" file="oci/oci_linux.go:66"
10:39:04 # time="2020-06-05 15:38:58.523338315Z" level=debug msg="Received container pid: -1" file="oci/runtime_oci.go:207"
10:39:04 # time="2020-06-05 15:38:58.523460620Z" level=error msg="Container creation error: file /usr/libexec/kata-containers/kata-proxy does not exist\n" file="oci/runtime_oci.go:210"
10:39:04 # time="2020-06-05 15:38:58.569597717Z" level=warning msg="unable to delete container b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d: `/usr/local/bin/kata-runtime --root /run/runc delete --force b193e2280d64581f5631604f061eafed6dbcb92225de94127f6c62b7d8dbc17d` failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n (exit status 1)" file="oci/runtime_oci.go:182"
10:39:04 # time="2020-06-05 15:39:00.502955556Z" level=debug msg="response error: container create failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n" file="go-grpc-middleware/chain.go:25" id=ce7a0f8a-cf4a-40d2-b4a3-a1cca3a2540a
10:39:04 # time="2020-06-05T15:39:00Z" level=fatal msg="run pod sandbox failed: rpc error: code = Unknown desc = container create failed: file /usr/libexec/kata-containers/kata-proxy does not exist\n"
10:39:04 # time="2020-06-05 15:39:00.529028759Z" level=debug msg="request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f145f72b-0da4-402c-91f4-87fe2bdcd045
10:39:04 # time="2020-06-05 15:39:00.529134863Z" level=debug msg="no filters were applied, returning full container list" file="server/container_list.go:59" id=f145f72b-0da4-402c-91f4-87fe2bdcd045
10:39:04 # time="2020-06-05 15:39:00.529201065Z" level=debug msg="response: &ListContainersResponse{Containers:[]*Container{},}" file="go-grpc-middleware/chain.go:25" id=f145f72b-0da4-402c-91f4-87fe2bdcd045
10:39:04 # time="2020-06-05 15:39:00.540909370Z" level=debug msg="request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=166e8737-3d90-40ed-a5a3-b186553b0e17
10:39:04 # time="2020-06-05 15:39:00.541015274Z" level=debug msg="response: &ListPodSandboxResponse{Items:[]*PodSandbox{},}" file="go-grpc-middleware/chain.go:25" id=166e8737-3d90-40ed-a5a3-b186553b0e17
10:39:04 # time="2020-06-05 15:39:00.544739703Z" level=debug msg="received signal" file="crio/main.go:48" signal=terminated
10:39:04 # time="2020-06-05 15:39:00.544816606Z" level=debug msg="Caught SIGTERM" file="crio/main.go:58"
10:39:04 # time="2020-06-05 15:39:00.545066014Z" level=debug msg="hook monitoring canceled: context canceled" file="hooks/monitor.go:60"
10:39:04 # time="2020-06-05 15:39:00.545071814Z" level=debug msg="closed http server" file="crio/main.go:278"
10:39:04 # time="2020-06-05 15:39:00.545077615Z" level=debug msg="closing exit monitor..." file="server/server.go:601"
10:39:04 # time="2020-06-05 15:39:00.629601241Z" level=debug msg="closed stream server" file="crio/main.go:308"
10:39:04 # time="2020-06-05 15:39:00.629761747Z" level=debug msg="closed monitors" file="crio/main.go:310"
10:39:04 # time="2020-06-05 15:39:00.629832849Z" level=debug msg="closed hook monitor" file="crio/main.go:313"
10:39:04 # time="2020-06-05 15:39:00.629912152Z" level=debug msg="closed main server" file="crio/main.go:318"
Here are the logs:
kata-collect-data.sh details
Meta details
Running kata-collect-data.sh version 1.11.0-rc0 (commit 43db1284e9d4e70f943aaaa1b1a477277c9081d6) at 2020-06-05.15:39:15.011966765+0000.
Runtime is /usr/local/bin/kata-runtime.
kata-env
Output of “/usr/local/bin/kata-runtime kata-env”:
file /usr/libexec/kata-containers/kata-proxy does not exist
Runtime config files
Runtime default config files
/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml
Runtime config file contents
Config file /etc/kata-containers/configuration.toml not found
Output of “cat "/usr/share/defaults/kata-containers/configuration.toml"”:
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.qemu]
path = "/usr/bin/qemu-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = " agent.log=debug"
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to 1
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = 1
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
default_maxvcpus = 0
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10
# The size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0
# Specifies whether virtio-mem will be enabled.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Shared file system type:
# - virtio-9p (default)
# - virtio-fs
shared_fs = "virtio-9p"
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/bin/virtiofsd"
# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024
# Extra args for virtiofsd daemon
#
# Format example:
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []
# Cache mode:
#
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"
# Specifies whether cache-related options will be set for block devices.
# Default false
#block_device_cache_set = true
# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true
# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false
# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true
# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = false
# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192
# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
use_vsock = true
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
# The value means the number of pcie_root_port
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
#pcie_root_port = 2
# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VM's boot time will increase, which can lead to startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true
# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"
# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before they are used.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when it gets
# a request from clients.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0
# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
enable_debug = true
[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = true
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
enable_debug = true
# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails loading the module.
# * The module is not available in the guest or it doesn't meet the guest kernel
# requirements, like architecture and version.
#
kernel_modules=[]
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# networks being added to the existing network namespace after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to
# the container network interface
# Options:
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used with a custom network. Only creates a tap device. No veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
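Note that this configuration (read by kata-runtime 1.11.0-rc0, per the meta details above) both sets use_vsock = true and defines [proxy.kata] with path = "/usr/libexec/kata-containers/kata-proxy", yet the runtime still fails on the missing proxy binary even though no proxy process should be needed with vsock. As a minimal host-side sketch (not part of the collected data), one can verify that vsock is actually usable on the CI node, which is what the proxy-less rust agent relies on:

# vhost-vsock must be available on the host for use_vsock / the 2.0 agent transport to work
ls -l /dev/vhost-vsock
lsmod | grep vhost_vsock || sudo modprobe vhost_vsock

# kata-check reports whether the host provides everything this runtime build expects
/usr/local/bin/kata-runtime kata-check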
KSM throttler
version
Output of " --version":
/usr/local/bin/kata-collect-data.sh: line 178: --version: command not found
systemd service
Image details
No image
Initrd details
No initrd
Logfiles
Runtime logs
Recent runtime problems found in system journal:
time="2020-06-05T15:35:42.042574793Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=58924 source=runtime
time="2020-06-05T15:35:42.044356453Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=58925 source=runtime
time="2020-06-05T15:36:02.684079727Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=59942 source=runtime
time="2020-06-05T15:36:02.72448759Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=59953 source=runtime
time="2020-06-05T15:36:09.712316256Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=60206 source=runtime
time="2020-06-05T15:36:09.759916062Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=60216 source=runtime
time="2020-06-05T15:36:15.055527433Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=60692 source=runtime
time="2020-06-05T15:36:15.16038297Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=60711 source=runtime
time="2020-06-05T15:36:15.19298957Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=60720 source=runtime
time="2020-06-05T15:36:15.242074427Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=60731 source=runtime
time="2020-06-05T15:36:34.842725842Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=61679 source=runtime
time="2020-06-05T15:36:34.909993412Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=61696 source=runtime
time="2020-06-05T15:36:39.736092442Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=61894 source=runtime
time="2020-06-05T15:36:39.786745351Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=61912 source=runtime
time="2020-06-05T15:36:49.160275908Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=62492 source=runtime
time="2020-06-05T15:36:49.177944505Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=62496 source=runtime
time="2020-06-05T15:36:49.218854185Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=62512 source=runtime
time="2020-06-05T15:36:49.232190635Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=62521 source=runtime
time="2020-06-05T15:37:14.058384114Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=63482 source=runtime
time="2020-06-05T15:37:14.113189738Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=63506 source=runtime
time="2020-06-05T15:37:21.383710908Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=63744 source=runtime
time="2020-06-05T15:37:21.490664667Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=63765 source=runtime
time="2020-06-05T15:37:27.675336498Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=64276 source=runtime
time="2020-06-05T15:37:27.727212825Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=64294 source=runtime
time="2020-06-05T15:37:28.240075393Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=64305 source=runtime
time="2020-06-05T15:37:28.278511073Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=64315 source=runtime
time="2020-06-05T15:37:46.212922846Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=65426 source=runtime
time="2020-06-05T15:37:46.249169952Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=65436 source=runtime
time="2020-06-05T15:37:50.49477075Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=65475 source=runtime
time="2020-06-05T15:37:50.543624375Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=65494 source=runtime
time="2020-06-05T15:37:58.597722723Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=66072 source=runtime
time="2020-06-05T15:37:58.635157769Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=66081 source=runtime
time="2020-06-05T15:38:05.71703456Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=66332 source=runtime
time="2020-06-05T15:38:05.764011823Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=66343 source=runtime
time="2020-06-05T15:38:18.958337746Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=67170 source=runtime
time="2020-06-05T15:38:18.994815709Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=67188 source=runtime
time="2020-06-05T15:38:21.718136899Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=67230 source=runtime
time="2020-06-05T15:38:21.767858921Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=67240 source=runtime
time="2020-06-05T15:38:32.200307626Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=67778 source=runtime
time="2020-06-05T15:38:32.258246232Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=67795 source=runtime
time="2020-06-05T15:38:38.892833243Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=68186 source=runtime
time="2020-06-05T15:38:38.936123642Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=68203 source=runtime
time="2020-06-05T15:38:55.724502811Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=69325 source=runtime
time="2020-06-05T15:38:55.763364156Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=69335 source=runtime
time="2020-06-05T15:38:55.827640782Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=69346 source=runtime
time="2020-06-05T15:38:55.863270815Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=69356 source=runtime
time="2020-06-05T15:38:58.521256543Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=69368 source=runtime
time="2020-06-05T15:38:58.566814421Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=69378 source=runtime
time="2020-06-05T15:39:03.835776149Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=create name=kata-runtime pid=69751 source=runtime
time="2020-06-05T15:39:03.872503421Z" level=error msg="file /usr/libexec/kata-containers/kata-proxy does not exist" arch=amd64 command=delete name=kata-runtime pid=69761 source=runtime
Proxy logs
No recent proxy problems found in system journal.
Shim logs
No recent shim problems found in system journal.
Throttler logs
No recent throttler problems found in system journal.
Container manager details
Have docker
Docker
Output of “docker version”:
Client:
Version: 18.06.3-ce
API version: 1.38
Go version: go1.10.3
Git commit: d7080c1
Built: Wed Feb 20 02:28:55 2019
OS/Arch: linux/amd64
Experimental: false
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Output of “docker info”:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Output of “systemctl show docker”:
Type=notify
Restart=on-failure
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=infinity
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Fri 2020-06-05 15:16:12 UTC
ExecMainStartTimestampMonotonic=394921265
ExecMainExitTimestamp=Fri 2020-06-05 15:32:44 UTC
ExecMainExitTimestampMonotonic=1387217000
ExecMainPID=19080
ExecMainCode=1
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// ; ignore_errors=no ; start_time=[Fri 2020-06-05 15:16:12 UTC] ; stop_time=[Fri 2020-06-05 15:32:44 UTC] ; pid=19080 ; code=exited ; status=0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=[not set]
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids bpf-firewall bpf-devices
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=19207
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=64024
LimitSIGPENDINGSoft=64024
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
PrivateMounts=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
FinalKillSignal=9
SendSIGKILL=yes
SendSIGHUP=no
WatchdogSignal=6
Id=docker.service
Names=docker.service
Requires=docker.socket system.slice sysinit.target
Wants=network-online.target
WantedBy=multi-user.target
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=multi-user.target shutdown.target
After=docker.socket sysinit.target network-online.target systemd-journald.socket system.slice firewalld.service basic.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=inactive
SubState=dead
FragmentPath=/lib/systemd/system/docker.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Fri 2020-06-05 15:32:44 UTC
StateChangeTimestampMonotonic=1387217037
InactiveExitTimestamp=Fri 2020-06-05 15:16:12 UTC
InactiveExitTimestampMonotonic=394921596
ActiveEnterTimestamp=Fri 2020-06-05 15:16:15 UTC
ActiveEnterTimestampMonotonic=398701665
ActiveExitTimestamp=Fri 2020-06-05 15:32:43 UTC
ActiveExitTimestampMonotonic=1386212695
InactiveEnterTimestamp=Fri 2020-06-05 15:32:44 UTC
InactiveEnterTimestampMonotonic=1387217037
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2020-06-05 15:16:12 UTC
ConditionTimestampMonotonic=394919834
AssertTimestamp=Fri 2020-06-05 15:16:12 UTC
AssertTimestampMonotonic=394919835
Transient=no
Perpetual=no
StartLimitIntervalUSec=1min
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
FailureActionExitStatus=-1
SuccessAction=none
SuccessActionExitStatus=-1
InvocationID=4abdc991442a4a479cce07200935fa5c
CollectMode=inactive
Have kubectl
Kubernetes
Output of “kubectl version”:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Output of “kubectl config view”:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Output of “systemctl show kubelet”:
Type=simple
Restart=always
NotifyAccess=none
RestartUSec=10s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=exit-code
UID=[not set]
GID=[not set]
NRestarts=39
ExecMainStartTimestamp=Fri 2020-06-05 15:39:13 UTC
ExecMainStartTimestampMonotonic=1776739072
ExecMainExitTimestamp=Fri 2020-06-05 15:39:13 UTC
ExecMainExitTimestampMonotonic=1776806799
ExecMainPID=69837
ExecMainCode=1
ExecMainStatus=255
ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS ; ignore_errors=no ; start_time=[Fri 2020-06-05 15:39:13 UTC] ; stop_time=[Fri 2020-06-05 15:39:13 UTC] ; pid=69837 ; code=exited ; status=255 }
Slice=system.slice
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=[not set]
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=no
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=19207
IPAccounting=no
Environment=[unprintable] [unprintable] KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
EnvironmentFiles=/var/lib/kubelet/kubeadm-flags.env (ignore_errors=yes)
EnvironmentFiles=/etc/default/kubelet (ignore_errors=yes)
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=0
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=524288
LimitNOFILESoft=1024
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=64024
LimitNPROCSoft=64024
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=64024
LimitSIGPENDINGSoft=64024
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
PrivateMounts=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=control-group
KillSignal=15
FinalKillSignal=9
SendSIGKILL=yes
SendSIGHUP=no
WatchdogSignal=6
Id=kubelet.service
Names=kubelet.service
Requires=sysinit.target system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=multi-user.target shutdown.target
After=system.slice sysinit.target systemd-journald.socket basic.target
Documentation=https://kubernetes.io/docs/home/
Description=kubelet: The Kubernetes Node Agent
LoadState=loaded
ActiveState=activating
SubState=auto-restart
FragmentPath=/lib/systemd/system/kubelet.service
DropInPaths=/etc/systemd/system/kubelet.service.d/0-crio.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Fri 2020-06-05 15:39:13 UTC
StateChangeTimestampMonotonic=1776807757
InactiveExitTimestamp=Fri 2020-06-05 15:39:13 UTC
InactiveExitTimestampMonotonic=1776807757
ActiveEnterTimestamp=Fri 2020-06-05 15:39:13 UTC
ActiveEnterTimestampMonotonic=1776739353
ActiveExitTimestamp=Fri 2020-06-05 15:39:13 UTC
ActiveExitTimestampMonotonic=1776806989
InactiveEnterTimestamp=Fri 2020-06-05 15:39:13 UTC
InactiveEnterTimestampMonotonic=1776806989
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2020-06-05 15:39:13 UTC
ConditionTimestampMonotonic=1776737673
AssertTimestamp=Fri 2020-06-05 15:39:13 UTC
AssertTimestampMonotonic=1776737673
Transient=no
Perpetual=no
StartLimitIntervalUSec=0
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
FailureActionExitStatus=-1
SuccessAction=none
SuccessActionExitStatus=-1
InvocationID=dcddd998334846cf9807f23faa483c34
CollectMode=inactive
Have crio
crio
Output of “crio --version”:
crio version 1.17.0-dev
commit: 0eec454168e381e460b3d6de07bf50bfd9b0d082
Output of “systemctl show crio”:
Type=simple
Restart=on-failure
NotifyAccess=none
RestartUSec=5s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestampMonotonic=0
ExecMainExitTimestampMonotonic=0
ExecMainPID=0
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/local/bin/crio ; argv[]=/usr/local/bin/crio --log-level debug ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=[not set]
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=no
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=19207
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=0
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=524288
LimitNOFILESoft=1024
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=64024
LimitNPROCSoft=64024
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=64024
LimitSIGPENDINGSoft=64024
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
PrivateMounts=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=control-group
KillSignal=15
FinalKillSignal=9
SendSIGKILL=yes
SendSIGHUP=no
WatchdogSignal=6
Id=crio.service
Names=crio.service
Requires=sysinit.target system.slice
Conflicts=shutdown.target
Before=shutdown.target
After=system.slice systemd-journald.socket basic.target sysinit.target
Documentation=https://github.com/cri-o/cri-o
Description=CRI-O daemon
LoadState=loaded
ActiveState=inactive
SubState=dead
FragmentPath=/etc/systemd/system/crio.service
UnitFileState=disabled
UnitFilePreset=enabled
StateChangeTimestampMonotonic=0
InactiveExitTimestampMonotonic=0
ActiveEnterTimestampMonotonic=0
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=no
AssertResult=no
ConditionTimestampMonotonic=0
AssertTimestampMonotonic=0
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
FailureActionExitStatus=-1
SuccessAction=none
SuccessActionExitStatus=-1
CollectMode=inactive
Output of “cat /etc/crio/crio.conf”:
# The CRI-O configuration file specifies all of the available configuration
# options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
# daemon, but in a TOML format that can be more easily modified and versioned.
#
# Please refer to crio.conf(5) for details of all configuration options.
# CRI-O supports partial configuration reload during runtime, which can be
# done by sending SIGHUP to the running process. Currently supported options
# are explicitly mentioned with: 'This option supports live configuration
# reload'.
# CRI-O reads its storage defaults from the containers-storage.conf(5) file
# located at /etc/containers/storage.conf. Modify this storage configuration if
# you want to change the system's defaults. If you want to modify storage just
# for CRI-O, you can change the storage configuration options here.
[crio]
# Path to the "root directory". CRI-O stores all of its data, including
# container images, in this directory.
#root = "/home/jenkins/.local/share/containers/storage"
# Path to the "run directory". CRI-O stores all of its state in this directory.
#runroot = "/run/user/1000/containers"
# Storage driver used to manage the storage of images and containers. Please
# refer to containers-storage.conf(5) to see all available storage drivers.
#storage_driver = "vfs"
# List to pass options to the storage driver. Please refer to
# containers-storage.conf(5) to see all available storage options.
#storage_option = [
#]
# The default log directory where all logs will go unless directly specified by
# the kubelet. The log directory specified must be an absolute directory.
log_dir = "/var/log/crio/pods"
# Location for CRI-O to lay down the version file
version_file = "/var/run/crio/version"
# The crio.api table contains settings for the kubelet/gRPC interface.
[crio.api]
# Path to AF_LOCAL socket on which CRI-O will listen.
listen = "/var/run/crio/crio.sock"
# IP address on which the stream server will listen.
stream_address = "127.0.0.1"
# The port on which the stream server will listen. If the port is set to "0", then
# CRI-O will allocate a random free port number.
stream_port = "0"
# Enable encrypted TLS transport of the stream server.
stream_enable_tls = false
# Path to the x509 certificate file used to serve the encrypted stream. This
# file can change, and CRI-O will automatically pick up the changes within 5
# minutes.
stream_tls_cert = ""
# Path to the key file used to serve the encrypted stream. This file can
# change and CRI-O will automatically pick up the changes within 5 minutes.
stream_tls_key = ""
# Path to the x509 CA(s) file used to verify and authenticate client
# communication with the encrypted stream. This file can change and CRI-O will
# automatically pick up the changes within 5 minutes.
stream_tls_ca = ""
# Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
grpc_max_send_msg_size = 16777216
# Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
grpc_max_recv_msg_size = 16777216
# The crio.runtime table contains settings pertaining to the OCI runtime used
# and options for how to set up and manage the OCI runtime.
[crio.runtime]
manage_network_ns_lifecycle = true
# A list of ulimits to be set in containers by default, specified as
# "<ulimit name>=<soft limit>:<hard limit>", for example:
# "nofile=1024:2048"
# If nothing is set here, settings will be inherited from the CRI-O daemon
#default_ulimits = [
#]
# default_runtime is the _name_ of the OCI runtime to be used as the default.
# The name is matched against the runtimes map below.
default_runtime = "runc"
# If true, the runtime will not use pivot_root, but instead use MS_MOVE.
no_pivot = false
# decryption_keys_path is the path where the keys required for
# image decryption are stored.
decryption_keys_path = "/etc/crio/keys/"
# Path to the conmon binary, used for monitoring the OCI runtime.
# Will be searched for using $PATH if empty.
conmon = ""
# Cgroup setting for conmon
conmon_cgroup = "system.slice"
# Environment variable list for the conmon process, used for passing necessary
# environment variables to conmon or the runtime.
conmon_env = [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
]
# If true, SELinux will be used for pod separation on the host.
selinux = false
# Path to the seccomp.json profile which is used as the default seccomp profile
# for the runtime. If not specified, then the internal default seccomp profile
# will be used.
seccomp_profile = ""
# Used to change the name of the default AppArmor profile of CRI-O. The default
# profile name is "crio-default-" followed by the version string of CRI-O. This
# profile only takes effect if the user does not specify a profile via the
# Kubernetes Pod's metadata annotation.
apparmor_profile = "crio-default-1.17.0-dev"
# Cgroup management implementation used for the runtime.
cgroup_manager = "cgroupfs"
# List of default capabilities for containers. If it is empty or commented out,
# only the capabilities defined in the containers json file by the user/kube
# will be added.
default_capabilities = [
"CHOWN",
"DAC_OVERRIDE",
"FSETID",
"FOWNER",
"NET_RAW",
"SETGID",
"SETUID",
"SETPCAP",
"NET_BIND_SERVICE",
"SYS_CHROOT",
"KILL",
]
# List of default sysctls. If it is empty or commented out, only the sysctls
# defined in the container json file by the user/kube will be added.
default_sysctls = [
]
# List of additional devices, specified as
# "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
# If it is empty or commented out, only the devices
# defined in the container json file by the user/kube will be added.
additional_devices = [
]
# Path to OCI hooks directories for automatically executed hooks. If one of the
# directories does not exist, then CRI-O will automatically skip them.
hooks_dir = [
"/usr/share/containers/oci/hooks.d",
]
# List of default mounts for each container. **Deprecated:** this option will
# be removed in future versions in favor of default_mounts_file.
default_mounts = [
]
# Path to the file specifying the default mounts for each container. The
# format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
# its default mounts from the following two files:
#
# 1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
# override file, where users can either add in their own default mounts, or
# override the default mounts shipped with the package.
#
# 2) /usr/share/containers/mounts.conf: This is the default file read for
# mounts. If you want CRI-O to read from a different, specific mounts file,
# you can change the default_mounts_file. Note, if this is done, CRI-O will
# only add mounts it finds in this file.
#
#default_mounts_file = ""
# Maximum number of processes allowed in a container.
pids_limit = 1024
# Maximum size allowed for the container log file. Negative numbers indicate
# that no size limit is imposed. If it is positive, it must be >= 8192 to
# match/exceed conmon's read buffer. The file is truncated and re-opened so the
# limit is never exceeded.
log_size_max = -1
# Whether container output should be logged to journald in addition to the kubernetes log file
log_to_journald = false
# Path to directory in which container exit files are written to by conmon.
container_exits_dir = "/var/run/crio/exits"
# Path to directory for container attach sockets.
container_attach_socket_dir = "/var/run/crio"
# The prefix to use for the source of the bind mounts.
bind_mount_prefix = ""
# If set to true, all containers will run in read-only mode.
read_only = false
# Changes the verbosity of the logs based on the level it is set to. Options
# are fatal, panic, error, warn, info, debug and trace. This option supports
# live configuration reload.
log_level = "info"
# Filter the log messages by the provided regular expression.
# This option supports live configuration reload.
log_filter = ""
# The UID mappings for the user namespace of each container. A range is
# specified in the form containerUID:HostUID:Size. Multiple ranges must be
# separated by comma.
uid_mappings = ""
# The GID mappings for the user namespace of each container. A range is
# specified in the form containerGID:HostGID:Size. Multiple ranges must be
# separated by comma.
gid_mappings = ""
# The minimal amount of time in seconds to wait before issuing a timeout
# regarding the proper termination of the container.
ctr_stop_timeout = 0
# **DEPRECATED** this option is being replaced by manage_ns_lifecycle, which is described below.
# #manage_network_ns_lifecycle = false
# manage_ns_lifecycle determines whether we pin and remove namespaces
# and manage their lifecycle
manage_ns_lifecycle = false
# The directory where the state of the managed namespaces gets tracked.
# Only used when manage_ns_lifecycle is true.
namespaces_dir = "/var/run/crio/ns"
# pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
pinns_path = ""
# The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
# The runtime to use is picked based on the runtime_handler provided by the CRI.
# If no runtime_handler is provided, the runtime will be picked based on the level
# of trust of the workload. Each entry in the table should follow the format:
#
#[crio.runtime.runtimes.runtime-handler]
# runtime_path = "/usr/local/bin/crio-runc"
# runtime_type = "oci"
# runtime_root = "/path/to/the/root"
#
# Where:
# - runtime-handler: name used to identify the runtime
# - runtime_path (optional, string): absolute path to the runtime executable in
# the host filesystem. If omitted, the runtime-handler identifier should match
# the runtime executable name, and the runtime executable should be placed
# in $PATH.
# - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
# omitted, an "oci" runtime is assumed.
# - runtime_root (optional, string): root directory for storage of containers
# state.
[crio.runtime.runtimes.runc]
runtime_path = "/usr/local/bin/crio-runc"
runtime_type = "oci"
runtime_root = "/run/runc"
[crio.runtime.runtimes.kata]
runtime_path = "/usr/local/bin/containerd-shim-kata-v2"
# Kata Containers is an OCI runtime, where containers are run inside lightweight
# VMs. Kata provides additional isolation towards the host, minimizing the host attack
# surface and mitigating the consequences of a container breakout.
# Kata Containers with the default configured VMM
#[crio.runtime.runtimes.kata-runtime]
# Kata Containers with the QEMU VMM
#[crio.runtime.runtimes.kata-qemu]
# Kata Containers with the Firecracker VMM
#[crio.runtime.runtimes.kata-fc]
# The crio.image table contains settings pertaining to the management of OCI images.
#
# CRI-O reads its configured registries defaults from the system wide
# containers-registries.conf(5) located in /etc/containers/registries.conf. If
# you want to modify just CRI-O, you can change the registries configuration in
# this file. Otherwise, leave insecure_registries and registries commented out to
# use the system's defaults from /etc/containers/registries.conf.
[crio.image]
# Default transport for pulling images from a remote container storage.
default_transport = "docker://"
# The path to a file containing credentials necessary for pulling images from
# secure registries. The file is similar to that of /var/lib/kubelet/config.json
global_auth_file = ""
# The image used to instantiate infra containers.
# This option supports live configuration reload.
pause_image = "k8s.gcr.io/pause:3.1"
# The path to a file containing credentials specific for pulling the pause_image from
# above. The file is similar to that of /var/lib/kubelet/config.json
# This option supports live configuration reload.
pause_image_auth_file = ""
# The command to run to have a container stay in the paused state.
# When explicitly set to "", it will fall back to the entrypoint and command
# specified in the pause image. When commented out, it will fall back to the
# default: "/pause". This option supports live configuration reload.
pause_command = "/pause"
# Path to the file which decides what sort of policy we use when deciding
# whether or not to trust an image that we've pulled. It is not recommended that
# this option be used, as the default behavior of using the system-wide default
# policy (i.e., /etc/containers/policy.json) is most often preferred. Please
# refer to containers-policy.json(5) for more details.
signature_policy = ""
# List of registries to skip TLS verification for pulling images. Please
# consider configuring the registries via /etc/containers/registries.conf before
# changing them here.
#insecure_registries = "[]"
# Controls how image volumes are handled. The valid values are mkdir, bind and
# ignore; the latter will ignore volumes entirely.
image_volumes = "mkdir"
# List of registries to be used when pulling an unqualified image (e.g.,
# "alpine:latest"). By default, registries is set to "docker.io" for
# compatibility reasons. Depending on your workload and usecase you may add more
# registries (e.g., "quay.io", "registry.fedoraproject.org",
# "registry.opensuse.org", etc.).
registries = [ "docker.io" ]
# ]
# The crio.network table contains settings pertaining to the management of
# CNI plugins.
[crio.network]
# Path to the directory where CNI configuration files are located.
network_dir = "/etc/cni/net.d/"
# Paths to directories where CNI plugin binaries are located.
plugin_dirs = [
"/opt/cni/bin/",
]
# A necessary configuration for Prometheus based metrics retrieval
[crio.metrics]
# Globally enable or disable metrics support.
enable_metrics = false
# The port on which the metrics server will listen.
metrics_port = 9090
No containerd
Packages
Have dpkg
Output of “dpkg -l|egrep "(cc-oci-runtime|cc-runtime|runv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"”:
ii qemu-utils 1:3.1+dfsg-8+deb10u5 amd64 QEMU utilities
No rpm
About this issue
- State: closed
- Created 4 years ago
- Comments: 21 (21 by maintainers)
I tried it a bit. It seems the problem is that we need the following CRI-O config in order to run the Kata shim v2 with CRI-O.
But CRI-O generates its config in each bats test, which does not include kata by default and does not allow outside callers to modify it. In other words, if I understand correctly, although CRI-O supports the shim v2 API, it does not include a shim v2 test suite at the moment.
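The exact snippet referred to is not reproduced above. For illustration only, a shim v2 runtime entry for CRI-O typically looks like the sketch below; the runtime_path comes from the config dump earlier, while the handler name and the runtime_root value are assumptions for this sketch, not taken from the test setup:

[crio.runtime.runtimes.kata]
  # Hypothetical sketch: point the handler at the shim v2 binary shown in the
  # config dump above, and declare it as a "vm" runtime type so CRI-O drives
  # it over the shim v2 API instead of the classic OCI path.
  runtime_path = "/usr/local/bin/containerd-shim-kata-v2"
  runtime_type = "vm"
  # Assumed value for the sketch; the state root for the runtime.
  runtime_root = "/run/vc"

With an entry like this present in the generated crio.conf, and the matching runtime_handler passed in the CRI requests, the bats tests could in principle exercise the shim v2 path; but as noted above, the test harness currently regenerates the config without it.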