origin: containers fail to start: "read-only file system"

I’m running Fedora Rawhide (27). I just updated all packages and rebooted, started my local cluster using oc cluster up, and then moved on to an application I want to run inside OpenShift:

$ oc new-app tt/weechat
W0705 09:00:58.997796   31726 newapp.go:333] Could not find an image stream match for "tt/weechat:latest". Make sure that a Docker image with that tag is available on the node for the deployment to succeed.
--> Found Docker image 388fb30 (43 minutes old) from  for "tt/weechat:latest"

    * This image will be deployed in deployment config "weechat"
    * The image does not expose any ports - if you want to load balance or send traffic to this component
      you will need to create a service with 'expose dc/weechat --port=[port]' later
    * WARNING: Image "tt/weechat:latest" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    deploymentconfig "weechat" created
--> Success
    Run 'oc status' to view your app.

$ oc status
In project My Project (myproject) on server https://192.168.1.5:8443

dc/weechat deploys docker.io/tt/weechat:latest
  deployment #1 failed 2 minutes ago: config change

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

$ oc get pods
NAME               READY     STATUS               RESTARTS   AGE
weechat-1-deploy   0/1       ContainerCannotRun   0          2m

$ oc logs weechat-1-deploy
container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/var/lib/origin/openshift.local.volumes/pods/b30301ac-614f-11e7-8ffd-68f728aba37f/volumes/kubernetes.io~secret/deployer-token-49gkv\\\" to rootfs \\\"/var/lib/docker/devicemapper/mnt/09ebd43f1e46533888fef04e3e5de4904dc86028227851b768f52ee8cdb4692a/rootfs\\\" at \\\"/var/lib/docker/devicemapper/mnt/09ebd43f1e46533888fef04e3e5de4904dc86028227851b768f52ee8cdb4692a/rootfs/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"mkdir /var/lib/docker/devicemapper/mnt/09ebd43f1e46533888fef04e3e5de4904dc86028227851b768f52ee8cdb4692a/rootfs/run/secrets/kubernetes.io: read-only file system\\\"\""

This is with the devicemapper graph backend on loopback; the same thing happens with the overlay2 backend. I even tried oc new-app fedora and got the same error message.
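The mkdir failure under /run/secrets/kubernetes.io suggests something on the host is already injecting a read-only tree at /run/secrets inside every container, before OpenShift tries to mount the service-account secret there. A quick way to check is to look at the docker OCI hooks and the secrets directory they inject; note that both paths below are assumptions about how Fedora's docker package is wired up, not something confirmed by the logs above:

```shell
# Diagnostic sketch (hedged): list the OCI hooks registered for docker and
# the host-side secrets directory they are believed to inject at /run/secrets.
# Both paths are assumptions; adjust for your distribution if they differ.
ls -l /usr/libexec/oci/hooks.d/ 2>/dev/null || echo "no OCI hooks directory"
ls -ld /usr/share/rhel/secrets 2>/dev/null || echo "no rhel secrets directory"
```

If both exist, the "Number of Docker Hooks: 3" line in docker info below is consistent with a hook being active for every container start.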

Version
$ oc version
oc v1.5.1
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.1.5:8443
openshift v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4

I presume you’ll be interested in what docker I’m running:

$ docker info
Containers: 162
 Running: 1
 Paused: 0
 Stopped: 161
Images: 189
Server Version: 1.13.1
Storage Driver: devicemapper
 Pool Name: docker-253:0-268692144-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 10.69 GB
 Data Space Total: 107.4 GB
 Data Space Available: 85.44 GB
 Metadata Space Used: 21.74 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.126 GB
 Thin Pool Minimum Free Space: 10.74 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.141 (2017-06-28)
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: oci runc
Default Runtime: oci
Init Binary: /usr/libexec/docker/docker-init-current
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.11.6-301.fc26.x86_64
Operating System: Fedora 27 (Rawhide)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 4
Total Memory: 11.44 GiB
Name: oat
ID: YC7N:MYIE:6SEL:JYLU:SRIG:PCVV:APZD:WTH4:4MGR:N4BG:CT53:ZW2O
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: tomastomecek
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 172.30.0.0/16
 127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)

$ docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-20.git27e468e.fc27.x86_64
 Go version:      go1.8.1
 Git commit:      27e468e/1.13.1
 Built:           Fri Jun 23 14:21:07 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-20.git27e468e.fc27.x86_64
 Go version:      go1.8.1
 Git commit:      27e468e/1.13.1
 Built:           Fri Jun 23 14:21:07 2017
 OS/Arch:         linux/amd64
 Experimental:    false

All the software is coming from Fedora RPM repositories.

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 28 (14 by maintainers)

Most upvoted comments

Also, in case you need to get going and are blocked by this, the workaround is to:

rm -rf /usr/share/rhel/secrets

You don’t need to restart docker; oc cluster up will instantly work again.
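The one-liner above can be wrapped in a small guard so it is safe to re-run; the SECRETS_DIR variable is introduced here purely for illustration and is not part of the original workaround:

```shell
# Workaround sketch, based on the comment above: remove the host secrets
# directory that ends up bind-mounted read-only over /run/secrets in
# containers. SECRETS_DIR is a hypothetical override added for testing.
SECRETS_DIR="${SECRETS_DIR:-/usr/share/rhel/secrets}"
if [ -e "$SECRETS_DIR" ]; then
    rm -rf "$SECRETS_DIR"
    echo "removed $SECRETS_DIR"
else
    echo "$SECRETS_DIR already absent, nothing to do"
fi
# Per the comment: no docker restart is needed afterwards.
```

Run it as root (the default path lives under /usr/share).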

I’m bumping this to P0; this is literally blocking any work on Fedora. I’m on F26.

Update the karma to get it out of testing, please.

@runcom thank you - will give it a try

Does @runcom’s fix in https://bugzilla.redhat.com/show_bug.cgi?id=1504709#c28 solve this issue as well?

I’m looking into this