kubernetes: `kubectl logs` panics when container has `allowPrivilegeEscalation: false`

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

For a pod whose process is erroring out, `kubectl logs` panics when checking the logs.

What you expected to happen:

I expected it to show the container's logs.

How to reproduce it (as minimally and precisely as possible):


# kubectl create ns scc
namespace "scc" created
# kubectl config set-context $(kubectl config current-context) --namespace scc
Context "kubernetes-admin@kubernetes" modified.

Using this pod spec:

# cat security-context.yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
# kubectl create -f security-context.yaml
pod "security-context-demo" created


                                 
# kubectl get pods -w             
NAME                    READY     STATUS              RESTARTS   AGE                           
security-context-demo   0/1       ContainerCreating   0          9s                            
security-context-demo   0/1       Error     0         10s                                      
^C#                               
                                 
                                 
# kubectl logs security-context-demo                                              
panic: standard_init_linux.go:178: exec user process caused "operation not permitted" [recovered]                                                                                             
        panic: standard_init_linux.go:178: exec user process caused "operation not permitted"  
goroutine 1 [running, locked to thread]:       
github.com/urfave/cli.HandleAction.func1(0xc4200af7a0)                                         
        /builddir/build/BUILD/docker-c4618fb6bf4058dcde877f773cfd4afb5abe626c/runc-31a9f6e22729606814e9bcbcf9eeebc1887527cb/Godeps/_workspace/src/github.com/urfave/cli/app.go:478 +0x23f     
panic(0x6f0ae0, 0xc42011ec40)                  
        /usr/lib/golang/src/runtime/panic.go:489 +0x2cf                                        
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization.func1(0xc4200af208, 0xc42000e078, 0xc4200af2a8)                                                               
        /builddir/build/BUILD/docker-c4618fb6bf4058dcde877f773cfd4afb5abe626c/runc-31a9f6e22729606814e9bcbcf9eeebc1887527cb/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:259 +0xc1                     
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization(0xc4200505f0, 0xaa88e0, 0xc42011ec40)                                                                         
        /builddir/build/BUILD/docker-c4618fb6bf4058dcde877f773cfd4afb5abe626c/runc-31a9f6e22729606814e9bcbcf9eeebc1887527cb/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:277 +0x353                    
main.glob..func8(0xc42007c780, 0x0, 0x0)       
        /builddir/build/BUILD/docker-c4618fb6bf4058dcde877f773cfd4afb5abe626c/runc-31a9f6e22729606814e9bcbcf9eeebc1887527cb/main_unix.go:26 +0x66                                             
reflect.Value.call(0x6d9a00, 0x750160, 0x13, 0x73b762, 0x4, 0xc4200af760, 0x1, 0x1, 0xc4200af6f0, 0x731240, ...)                                                                              
        /usr/lib/golang/src/reflect/value.go:434 +0x91f                                        
reflect.Value.Call(0x6d9a00, 0x750160, 0x13, 0xc4200af760, 0x1, 0x1, 0x665966, 0x73b8ee, 0x4)  
        /usr/lib/golang/src/reflect/value.go:302 +0xa4                                         
github.com/urfave/cli.HandleAction(0x6d9a00, 0x750160, 0xc42007c780, 0x0, 0x0)                 
        /builddir/build/BUILD/docker-c4618fb6bf4058dcde877f773cfd4afb5abe626c/runc-31a9f6e22729606814e9bcbcf9eeebc1887527cb/Godeps/_workspace/src/github.com/urfave/cli/app.go:487 +0x18f     
github.com/urfave/cli.Command.Run(0x73b90e, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x74d19f, 0x51, 0x0, ...)                                                                                           
        /builddir/build/BUILD/docker-c4618fb6bf4058dcde877f773cfd4afb5abe626c/runc-31a9f6e22729606814e9bcbcf9eeebc1887527cb/Godeps/_workspace/src/github.com/urfave/cli/command.go:191 +0xac8 
github.com/urfave/cli.(*App).Run(0xc4200ca000, 0xc42000c140, 0x2, 0x2, 0x0, 0x0)               
        /builddir/build/BUILD/docker-c4618fb6bf4058dcde877f773cfd4afb5abe626c/runc-31a9f6e22729606814e9bcbcf9eeebc1887527cb/Godeps/_workspace/src/github.com/urfave/cli/app.go:240 +0x5d6     
main.main()                                    
        /builddir/build/BUILD/docker-c4618fb6bf4058dcde877f773cfd4afb5abe626c/runc-31a9f6e22729606814e9bcbcf9eeebc1887527cb/main.go:137 +0xbd2                                                
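
Worth noting: the `panic:` above is Go runtime output from runc's init process inside the container, not from the kubectl binary itself; `kubectl logs` is relaying the container's own log verbatim. One way to confirm this on the node, assuming the docker runtime as used here (container ID hypothetical):

# docker ps -a --filter name=sec-ctx-demo   # find the exited container
# docker logs <container-id>                # the same runc panic appears here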

But if I run a normal (non-failing) pod, `kubectl logs` works fine:

# kubectl run web --image centos/httpd 
deployment "web" created

# kubectl get pods -w
NAME                    READY     STATUS              RESTARTS   AGE
security-context-demo   0/1       CrashLoopBackOff    3          1m
web-b75668b7-hrkzx      0/1       ContainerCreating   0          6s
security-context-demo   0/1       Error     4         2m
web-b75668b7-hrkzx   1/1       Running   0         12s
security-context-demo   0/1       CrashLoopBackOff   4         2m
^C# 

# kubectl logs web-b75668b7-hrkzx
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message
# 

Anything else we need to know?:

Environment:

  • Kubernetes version:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

Machines were started using the following Vagrantfile:

$ cat Vagrantfile 
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|

  config.vm.define "c1" do |c1|
    c1.vm.box = "centos/7"
    c1.vm.hostname = "c1"
  end

  config.vm.define "w1" do |w1|
    w1.vm.box = "centos/7"
    w1.vm.hostname = "w1"
  end

  config.vm.define "w2" do |w2|
    w2.vm.box = "centos/7"
    w2.vm.hostname = "w2"
  end

  config.vm.provider "libvirt" do |libvirt, override|
    libvirt.driver = "kvm"
    libvirt.memory = 2048
    libvirt.cpus = 2
    libvirt.cpu_mode = 'host-passthrough'
  end

end
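
Assuming the vagrant-libvirt plugin is installed, the three machines come up with:

$ vagrant up --provider=libvirt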
  • OS:
# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel:
# uname -a
Linux c1 3.10.0-693.2.1.el7.x86_64 #1 SMP Wed Sep 6 20:06:13 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:

Used kubeadm to install a 1-master, 2-node cluster.
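
Roughly the standard kubeadm flow for that layout (a sketch, not the exact commands used; the token and CA hash come from `kubeadm init`'s output):

# kubeadm init        # on the master, c1
# kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>   # on w1 and w2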

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 33 (33 by maintainers)

Most upvoted comments

@dims This worked for me:

# cat security-context.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
      seLinuxOptions:
        type: "docker_t"

and verifying on the node that the container runs with the requested SELinux type:

# docker inspect 5db697e37590 | grep docker_t
        "ProcessLabel": "system_u:system_r:docker_t:s0:c147,c992",
                "label=type:docker_t",