moby: Cannot remove network due to task

Description

Steps to reproduce the issue:

  1. create a network with this command: docker network create --attachable --driver overlay cluster-network
  2. run a couple of services in swarm mode, then delete all of the services
  3. try to delete the network: docker network rm cluster-network
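The steps above can be sketched as a small script. This is only a reproduction sketch: the service name web and the nginx:alpine image are placeholders (not from the report), and the guard makes it a no-op on machines without docker.

```shell
#!/bin/sh
# Reproduction sketch for the reported bug. "web" and nginx:alpine are
# placeholder names; run this on a swarm manager node.
reproduce() {
  command -v docker >/dev/null 2>&1 || { echo "docker not available"; return 1; }
  docker network create --attachable --driver overlay cluster-network
  docker service create --name web --network cluster-network nginx:alpine
  docker service rm web
  # This is the step that intermittently fails with
  # "network ... is in use by task ...":
  docker network rm cluster-network
}
```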

Describe the results you received:

docker network rm cluster-network
Error response from daemon: rpc error: code = 9 desc = network qytxrqgp7pw1915tqhdnkd4si is in use by task 8ruj7pjh65g9du0m1y7ce476i

Describe the results you expected: the network should be deleted, or the error should at least carry a proper description. What is the task it refers to?

Additional information you deem important (e.g. issue happens only occasionally):

docker network inspect cluster-network
[
    {
        "Name": "cluster-network",
        "Id": "qytxrqgp7pw1915tqhdnkd4si",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": true,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]

Output of docker version:

Client:
 Version:      1.13.0
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:58:26 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:58:26 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 129
Server Version: 1.13.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 106
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: 7tri47t51271szj46y1sysjcf
 Is Manager: true
 ClusterID: 8jnswr0kvdlavkn0puuuhljxd
 Managers: 3
 Nodes: 6
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.0.27
 Manager Addresses:
  192.168.0.27:2377
  192.168.0.32:2377
  192.168.0.33:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-62-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.953 GiB
Name: mngr01
ID: YNJP:5BEI:W4UN:NPUK:EJ3R:CYBC:RBMW:GO2Q:ASJA:PDTT:TZBK:CYWQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.): VMware vshpere, ubuntu16.04 host

About this issue

  • State: open
  • Created 7 years ago
  • Reactions: 7
  • Comments: 48 (9 by maintainers)

Most upvoted comments

I had to restart the docker daemon on the swarm master to get rid of the task (systemctl restart docker), then ran docker network rm <network-id>.
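The workaround above can be wrapped in a small helper. A sketch, assuming systemd manages dockerd (as on the reporter's Ubuntu 16.04 hosts) and that it is run on the swarm manager:

```shell
#!/bin/sh
# force_network_rm: try to remove a swarm network; if the first attempt
# fails ("in use by task ..."), bounce the daemon and retry. Assumes
# systemd/systemctl; run on the swarm manager node.
force_network_rm() {
  net="$1"
  command -v docker >/dev/null 2>&1 || { echo "docker not available"; return 1; }
  docker network rm "$net" && return 0
  sudo systemctl restart docker
  docker network rm "$net"
}
```

Usage: force_network_rm cluster-network. Note this restarts every container managed by the daemon unless live-restore is enabled, so it is only suitable outside production.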

Decided to tackle this on my home computer, and it seems I can reproduce it with the latest versions, including on a fresh docker install. I'll try to write a minimal reproduction script for debugging this, since currently I'm not entirely sure whether something funky is happening in the container itself or it is related to something else.

Sorry @thaJeztah I missed your comment. Yes, the task is not defined on any of our manager nodes; it seems to simply not exist. We can’t inspect tasks from worker nodes but have confirmed that the rates_default network doesn’t extend to any of them.

Right now our only workaround has been to deploy our stack to a second rates2_default network and update all of our references; the rates_default orphan still cannot be removed.

Update: my bad, actually, I had a container (not started with service create) running attached to the network (the network has --attachable set). So in my case it actually was a problem of communicating that a task was attached to the network while it was only a normal container.
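As that comment shows, an --attachable overlay network can be held by a plain container started with docker run, not just by a swarm task. Before assuming the task is orphaned, it is worth listing what is still attached; a sketch using docker network inspect's format option:

```shell
#!/bin/sh
# attached_containers: print the names of containers still attached to a
# network. The Containers map in "docker network inspect" lists local
# attachments, including plain (non-service) containers on an
# --attachable network.
attached_containers() {
  net="$1"
  command -v docker >/dev/null 2>&1 || { echo "docker not available"; return 1; }
  docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' "$net"
}
```

Usage: attached_containers cluster-network. Note the map only covers containers on the node where the command runs, so check each node in the swarm.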

Experiencing here as well - Server Version: 17.05.0-ce-rc1 in docker-for-aws

 docker info
Containers: 5
 Running: 5
 Paused: 0
 Stopped: 0
Images: 14
Server Version: 17.05.0-ce-rc1
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: active
 NodeID: 4ypru6pvfe8mxublqrupl5uyv
 Is Manager: true
 ClusterID: xdahte25v3j6i0cjjewbnzwel
 Managers: 3
 Nodes: 9
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 172.31.12.245
 Manager Addresses:
  172.31.12.245:2377
  172.31.29.200:2377
  172.31.33.4:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.21-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.854GiB
Name: ip-172-31-12-245.us-west-2.compute.internal
ID: PWGX:D5OW:MNZC:KTLN:GJ3V:Z43D:7SZC:GWQ3:XPP7:WB5O:OVDY:YCSB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 179
 Goroutines: 349
 System Time: 2017-05-23T13:49:37.335845818Z
 EventsListeners: 0
Username: wedeployci
Registry: https://index.docker.io/v1/
Labels:
 com.wedeploy.node.type=manager
 os=linux
 region=us-west-2
 availability_zone=us-west-2a
 instance_type=t2.medium
 node_type=manager
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Note that I removed all the services; inspecting the task then gives me:

~ $ docker network rm mynet 
Error response from daemon: rpc error: code = 9 desc = network l58m22zj9z9t8xhu9tgndtdnl is in use by task 8awbbulm89s5tuo08nt82cxih
~ $ docker inspect 8awbbulm89s5tuo08nt82cxih
[
    {
        "ID": "",
        "Version": {},
        "CreatedAt": "0001-01-01T00:00:00Z",
        "UpdatedAt": "0001-01-01T00:00:00Z",
        "Labels": null,
        "Spec": {
            "ContainerSpec": {},
            "ForceUpdate": 0
        },
        "Status": {
            "Timestamp": "0001-01-01T00:00:00Z",
            "ContainerStatus": {},
            "PortStatus": {}
        }
    }
]

I’m experiencing the same issue running 17.03.0-ce

root@dk1w:~# docker info
Containers: 14
 Running: 10
 Paused: 0
 Stopped: 4
Images: 21
Server Version: 17.03.0-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 255
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: active
 NodeID: l5ug7tsu7wyjd2n1qeersvo0u
 Is Manager: false
 Node Address: 192.168.100.211
 Manager Addresses:
  192.168.100.201:2377
  192.168.100.202:2377
  192.168.100.203:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 977c511eda0925a723debdc94d09459af49d082a
runc version: a01dafd48bc1c7cc12bdb01206f9fea7dd6feb70
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-64-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 5.823 GiB
Name: dk1w
ID: ......
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: sitamet
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: true
Insecure Registries:
 registry.......
 127.0.0.0/8
Live Restore Enabled: false

The issue still persists in 20.10.8. Currently we restart the docker daemons, which then allows us to remove the network. That is only workable in testing and is no real long-term solution.

I've been having the same issue occasionally when working with stacks. I can reproduce it with a script that starts a swarm stack and then connects a few containers to the stack's network. The issue arises when I Ctrl-C out of that script after the stack has been created and one or two containers have been connected to the stack's network.

After the Ctrl-C, I can prune everything related to the stack and these spawned containers, but just can't delete the network, even though no containers or services related to it are running. I can write a minimal script to help debug this, but since it occurs in a work-related script I'll need to rewrite something similar first.

Necessary info:

$ docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056
 Built:             Thu Apr 11 04:43:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false
~ $ docker network rm locklib_default
Error response from daemon: rpc error: code = FailedPrecondition desc = network prpox4dh2qjx2ittlwtr60eno is in use by task oowhiznjv7vszezqll02cr29o
$ docker inspect locklib_default
[
    {
        "Name": "locklib_default",
        "Id": "prpox4dh2qjx2ittlwtr60eno",
        "Created": "2019-05-21T06:01:19.8515193Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {
            "com.docker.stack.namespace": "locklib"
        }
    }
]
$ docker inspect oowhiznjv7vszezqll02cr29o
[
    {
        "ID": "oowhiznjv7vszezqll02cr29o",
        "Version": {
            "Index": 19272
        },
        "CreatedAt": "2019-05-21T06:08:54.0494636Z",
        "UpdatedAt": "2019-05-21T06:08:54.2787945Z",
        "Labels": {},
        "Spec": {
            "NetworkAttachmentSpec": {
                "ContainerID": "c4e0a35fb0fd79fcaadb9d6c34ac31e3385810fc5a961b70726617345b144086"
            },
            "Networks": [
                {
                    "Target": "prpox4dh2qjx2ittlwtr60eno"
                }
            ],
            "ForceUpdate": 0,
            "Runtime": "attachment"
        },
        "NodeID": "wvfx5gq6b8o0taaxj9twogsio",
        "Status": {
            "Timestamp": "2019-05-21T06:08:54.2280174Z",
            "State": "running",
            "Message": "started",
            "PortStatus": {}
        },
        "DesiredState": "running",
        "NetworksAttachments": [
            {
                "Network": {
                    "ID": "prpox4dh2qjx2ittlwtr60eno",
                    "Version": {
                        "Index": 19220
                    },
                    "CreatedAt": "2019-05-21T06:01:19.8515193Z",
                    "UpdatedAt": "2019-05-21T06:06:10.7315856Z",
                    "Spec": {
                        "Name": "locklib_default",
                        "Labels": {
                            "com.docker.stack.namespace": "locklib"
                        },
                        "DriverConfiguration": {
                            "Name": "overlay"
                        },
                        "Attachable": true,
                        "Scope": "swarm"
                    },
                    "DriverState": {
                        "Name": "overlay",
                        "Options": {
                            "com.docker.network.driver.overlay.vxlanid_list": "4097"
                        }
                    },
                    "IPAMOptions": {
                        "Driver": {
                            "Name": "default"
                        },
                        "Configs": [
                            {
                                "Subnet": "10.0.0.0/24",
                                "Gateway": "10.0.0.1"
                            }
                        ]
                    }
                },
                "Addresses": [
                    "10.0.0.9/24"
                ]
            }
        ]
    }
]
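The NetworkAttachmentSpec in the task output above records the container that holds the attachment. That field can be pulled straight from the task ID in the error message; a sketch using the generic docker inspect formatter (run on a manager node, since tasks are only inspectable there):

```shell
#!/bin/sh
# task_container: given a task ID from the "in use by task ..." error,
# print the container ID recorded in its NetworkAttachmentSpec (empty if
# the task is not a network attachment task). Must run on a manager.
task_container() {
  task="$1"
  command -v docker >/dev/null 2>&1 || { echo "docker not available"; return 1; }
  docker inspect -f '{{.Spec.NetworkAttachmentSpec.ContainerID}}' "$task"
}
```

Usage: task_container oowhiznjv7vszezqll02cr29o, then docker rm -f the printed container (if it still exists) before retrying the network removal.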

I have the same issue in Docker 18.06.