moby: Container cannot start after Docker engine upgrade from 17.03.1-ce to 17.06.0-ce-rc3

Description

Container cannot start after Docker engine upgrade from 17.03.1-ce to 17.06.0-ce-rc3

Steps to reproduce the issue:

  1. On Docker 17.03.1-ce: docker run -itd --name test --restart always nginx
  2. Upgrade the Docker engine to 17.06.0-ce-rc3
  3. docker start test (the steps are consolidated in the shell sketch below)
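
The same steps as a single shell transcript (a sketch; the upgrade command depends on how the engine was installed, so step 2 is left as a comment):

  # 1. On 17.03.1-ce: create a container with a restart policy
  docker run -itd --name test --restart always nginx
  # 2. Upgrade the engine to 17.06.0-ce-rc3 (exact command depends on the
  #    installation method, so it is not spelled out here)
  # 3. After the upgrade, try to start the container again
  docker start test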

Describe the results you received:

Error response from daemon: oci runtime error: container with id exists: 23f59a891ed34914345852ce0b39493a2e6750aabff26457194721730652a532
Error: failed to start containers: test

Describe the results you expected: The container should restart automatically after the upgrade, and it should be startable with the docker start command.
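
A quick way to confirm that the restart policy itself survived the upgrade (my own check, not part of the original report) is to read it back from docker inspect:

  # HostConfig.RestartPolicy is a standard field of the inspect output
  docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' test   # expected: always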

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client:
 Version:      17.06.0-ce-rc3
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   7953dbc
 Built:        Tue Jun 13 08:18:34 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.0-ce-rc3
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   7953dbc
 Built:        Tue Jun 13 08:17:27 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 3
 Running: 1
 Paused: 0
 Stopped: 2
Images: 9
Server Version: 17.06.0-ce-rc3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 53
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: nyzcah84bhar0e20lfvyfszq0
 Is Manager: true
 ClusterID: 4s0b9dwtroj8nlc0x5iaetek7
 Managers: 2
 Nodes: 2
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Root Rotation In Progress: false
 Node Address: 192.168.33.229
 Manager Addresses:
  192.168.33.229:2377
  192.168.33.230:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
 apparmor
Kernel Version: 4.4.0-63-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.954GiB
Name: c1273bf40ae9c40e9a5f4ed1080f89c1a-node1
ID: 35LF:OVBC:A5RH:HVMY:2PDQ:YZFK:3267:BHJJ:4Y3Y:2IXU:2VSD:KE7F
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Registry Mirrors:
  xxx
Live Restore Enabled: false

WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.):

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 17 (12 by maintainers)

Most upvoted comments

I’ve just upgraded from 17.03 to 17.06 on AWS, and I’m seeing the same problem. The container was created with restart: always, and I had to docker rm -f the containers. This was the cwspear/docker-local-persist-volume-plugin image, which I created as a service using Ansible:

  docker_service:
    project_name: driver-local-persist
    definition:
      version: '3'
      services:
        driver-persist:
          image: cwspear/docker-local-persist-volume-plugin
          volumes:
          - /run/docker/plugins/:/run/docker/plugins
          - /plugin-data:/var/lib/docker/plugin-data
          - /data:/data
          restart: always
          deploy:
            mode: global
  tags: services

Funnily enough, containers created manually on the CLI with docker service had no problems.
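
For reference, here is the workaround mentioned above together with a docker service create equivalent of the Compose definition (a sketch: the service name and the bind-mount flags are my own mapping of the YAML, not taken from the comment):

  # Workaround from this comment: force-remove the stuck container, then recreate it.
  # The container name below is illustrative; use the actual name from docker ps -a.
  docker rm -f driver-local-persist_driver-persist_1

  # CLI-created service that reportedly did not hit the bug
  docker service create \
    --name driver-local-persist \
    --mode global \
    --mount type=bind,source=/run/docker/plugins/,target=/run/docker/plugins \
    --mount type=bind,source=/plugin-data,target=/var/lib/docker/plugin-data \
    --mount type=bind,source=/data,target=/data \
    cwspear/docker-local-persist-volume-plugin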