moby: "Error response from daemon: service endpoint with name es already exists" when starting container

Hey guys, I have a lot of containers that use the unless-stopped restart policy. After a daemon restart (done to test that everything comes back up), they all recovered fine, except that for the es container I got the error from the title… There is only one container named es.

I tried to start it manually, but got the same error… this should have worked, as it's the very container the message complains about.

Here is some more info:

$ sudo docker info
Containers: 20
 Running: 12
 Paused: 0
 Stopped: 8
Images: 1069
Server Version: 1.10.1
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Plugins: 
 Volume: local
 Network: null host overlay bridge
Kernel Version: 4.3.0-040300-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.16 GiB
Name: seb
ID: 3IIK:AWIX:PLOR:BPQ4:XNEL:SSXQ:2GUL:VEKX:OVCQ:SCCX:MN2U:DTWH
WARNING: No swap limit support
Cluster store: consul://localhost:8500
Cluster advertise: 192.168.123.18:2375

$ sudo docker version
Client:
 Version:      1.10.1
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   9e83765
 Built:        Thu Feb 11 19:27:08 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.1
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   9e83765
 Built:        Thu Feb 11 19:27:08 2016
 OS/Arch:      linux/amd64

$ sudo docker ps -a
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS                          PORTS                  NAMES
7e63bf3754b0        hgdata1/focus:297f50a8ec59           "/bin/sh -c ./trap.sh"   18 minutes ago      Up 16 minutes                   8020/tcp               focus-v7
35828cac56ea        f06503660e96                         "bash"                   21 minutes ago      Exited (130) 9 minutes ago                             small_curie
5bf6295f7c69        hgdata1/elastalert:b846d3461eb1      "./trap.sh"              47 minutes ago      Up 43 minutes                                          elastalert
20b45a285e46        hgdata1/beats:b846d3461eb1           "./trap.sh"              47 minutes ago      Restarting (7) 14 minutes ago                          httpd-beat-packet
8765d32780c2        hgdata1/beats:b846d3461eb1           "./trap.sh"              47 minutes ago      Restarting (7) 14 minutes ago                          mysql-beat-packet
df39c284be48        hgdata1/beats:b846d3461eb1           "./trap.sh"              47 minutes ago      Restarting (7) 14 minutes ago                          beat-topbeat_httpd_bearfist
bc9e517f4e57        hgdata1/ui:88a1a097c407              "./trap.sh"              About an hour ago   Up 43 minutes                   8000/tcp               admin-a1
550953ac7e21        hgdata1/ui:88a1a097c407              "./trap.sh"              About an hour ago   Exited (0) About an hour ago                           admin-a1_previous_2
afe6d33b92ba        f06503660e96                         "/bin/sh -c ./trap.sh"   About an hour ago   Exited (0) 18 minutes ago                              focus-v7_previous_2
276bf2f70b31        hgdata1/kibana:a99520a05c03          "./trap.sh"              About an hour ago   Up 43 minutes                   5601/tcp               kibana
883d813cb85e        hgdata1/logstash:a99520a05c03        "./trap.sh"              About an hour ago   Up 43 minutes                                          logstash
62fbfcdb9b70        hgdata1/elasticsearch:a99520a05c03   "./trap.sh"              About an hour ago   Exited (128) 45 minutes ago                            es
0e982a0ba0e3        hgdata1/ldap_bearfist:16a8e5192c02   "./trap.sh"              About an hour ago   Up 43 minutes                                          ldap-bearfist-v7
db2607fda9d8        hgdata1/api:16a8e5192c02             "./trap.sh"              About an hour ago   Up 43 minutes                                          api-bearfist-v7
62742daaba39        hgdata1/persistence:16a8e5192c02     "/bin/sh -c 'sudo /tr"   About an hour ago   Up 43 minutes                   3306/tcp               db-bearfist-v7
433a6f77f1da        hgdata1/httpd:16a8e5192c02           "./trap.sh"              About an hour ago   Up 43 minutes                   0.0.0.0:443->443/tcp   httpd
e4a758e6f86a        hgdata1/ldap_admin:16a8e5192c02      "./trap.sh"              About an hour ago   Up 43 minutes                                          ldap-admin-a1
e60803dcbb63        hgdata1/api:16a8e5192c02             "./trap.sh"              About an hour ago   Up 43 minutes                                          api-admin-a1
4bcb5c831d03        hgdata1/api:16a8e5192c02             "./trap.sh"              About an hour ago   Up 43 minutes                                          api-ops-o1
892a71544263        hgdata1/modsecurity:60be76623f1f     "./trap.sh"              About an hour ago   Exited (0) About an hour ago                           modsecurity

$ sudo docker inspect es
[
    {
        "Id": "62fbfcdb9b7000113391d5a5427ae90b37b0c54252ff5c05a4dc2d9b0e4d4447",
        "Created": "2016-02-17T13:24:50.575290212Z",
        "Path": "./trap.sh",
        "Args": [],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 128,
            "Error": "service endpoint with name es already exists",
            "StartedAt": "2016-02-17T13:44:29.646211683Z",
            "FinishedAt": "2016-02-17T14:11:26.649051684Z"
        },
        "Image": "sha256:820bc8ede34a6d1d657188b28a3ab1ea9eedc248e94747ed001dc2d5c2caf18b",
        "ResolvConfPath": "/home/seb/hgdata/deployments/docker/containers/62fbfcdb9b7000113391d5a5427ae90b37b0c54252ff5c05a4dc2d9b0e4d4447/resolv.conf",
        "HostnamePath": "/home/seb/hgdata/deployments/docker/containers/62fbfcdb9b7000113391d5a5427ae90b37b0c54252ff5c05a4dc2d9b0e4d4447/hostname",
        "HostsPath": "/home/seb/hgdata/deployments/docker/containers/62fbfcdb9b7000113391d5a5427ae90b37b0c54252ff5c05a4dc2d9b0e4d4447/hosts",
        "LogPath": "/home/seb/hgdata/deployments/docker/containers/62fbfcdb9b7000113391d5a5427ae90b37b0c54252ff5c05a4dc2d9b0e4d4447/62fbfcdb9b7000113391d5a5427ae90b37b0c54252ff5c05a4dc2d9b0e4d4447-json.log",
        "Name": "/es",
        "RestartCount": 0,
        "Driver": "overlay",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/home/seb/hgdata/deployments/elk/elasticsearch:/home/elasticsearch:rw"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "backbone2",
            "PortBindings": null,
            "RestartPolicy": {
                "Name": "unless-stopped",
                "MaximumRetryCount": 0
            },
            "VolumeDriver": "",
            "VolumesFrom": [],
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "ShmSize": 67108864,
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "KernelMemory": 0,
            "Memory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null
        },
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/home/seb/hgdata/deployments/docker/overlay/e9bcae64c4da4f3d74006d6795801044340461f3479ef6af1e2dddf2af1b4404/root",
                "MergedDir": "/home/seb/hgdata/deployments/docker/overlay/0a92bf5594a6701a2557a9e39585ecbc337fd04326c2935b73edb91e2086c2dd/merged",
                "UpperDir": "/home/seb/hgdata/deployments/docker/overlay/0a92bf5594a6701a2557a9e39585ecbc337fd04326c2935b73edb91e2086c2dd/upper",
                "WorkDir": "/home/seb/hgdata/deployments/docker/overlay/0a92bf5594a6701a2557a9e39585ecbc337fd04326c2935b73edb91e2086c2dd/work"
            }
        },
        "Mounts": [
            {
                "Source": "/home/seb/hgdata/deployments/elk/elasticsearch",
                "Destination": "/home/elasticsearch",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "62fbfcdb9b70",
            "Domainname": "",
            "User": "elasticsearch",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "9200/tcp": {},
                "9300/tcp": {}
            },
            "Tty": true,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "./trap.sh"
            ],
            "Image": "hgdata1/elasticsearch:a99520a05c03",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "7bca83422a8c4d24e9eb786262bd0368802dd3f2693343d0431ffaeb1614f098",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "/var/run/docker/netns/7bca83422a8c",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "backbone2": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "38a2aa06eb3bae5f48b988f7a9cf7950b5cc6798243554da1f38b5f53b9fcffd",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                },
                "bridge": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "0df17fcb288d7a4e9b77815f78388dfa494f16b35dffeb38122f92220bc61a99",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                }
            }
        }
    }
]

$ sudo docker start es
Error response from daemon: service endpoint with name es already exists
Error: failed to start containers: es

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 7
  • Comments: 73 (16 by maintainers)

Most upvoted comments

@sebi-hgdata can you share the output of docker network inspect backbone2? If it is an overlay network, we can see this error when a container with the name es exists in the same network on any other node.

Also, you could do docker network disconnect -f backbone2 es to forcefully remove a stale endpoint from a network (from any node in the docker cluster).
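
For this specific case, a minimal cleanup sequence would look something like the following (assuming, per the inspect output above, that es is attached to the backbone2 network):

$ sudo docker network disconnect -f backbone2 es
$ sudo docker start es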

I’ve seen this happen too but the

docker network disconnect -f <net> <con>

command seems to do the trick

Reproduced with Docker 1.12.3 after the filesystem filled up on /. A service docker restart cleaned the stale endpoints of all non-running containers out of the Docker network.

Here’s an even easier proof of concept, using just bash, jq and curl:

nid=$(docker network create test1)
cid=$(curl --unix-socket /var/run/docker.sock --data "{\"Cmd\": [\"sleep\", \"500\"], \"Image\": \"alpine\", \"NetworkingConfig\": {\"EndpointsConfig\": {\"$nid\": {} } } }" http://localhost/containers/create --header "Content-Type:application/json" | jq -r .Id)
docker start $cid

This gives an error like service endpoint with name berserk_banach already exists about 80% of the time.

But this works (the only difference is that EndpointsConfig is keyed by the network name, test1, instead of the network ID):

nid=$(docker network create test1)
cid=$(curl --unix-socket /var/run/docker.sock --data "{\"Cmd\": [\"sleep\", \"500\"], \"Image\": \"alpine\", \"NetworkingConfig\": {\"EndpointsConfig\": {\"test1\": {} } } }" http://localhost/containers/create --header "Content-Type:application/json" | jq -r .Id)
docker start $cid

Interestingly, this issue is not reproducible if you create the container using the Docker CLI.

This works perfectly

nid=$(docker network create test1)
cid=$(docker create --net=$nid alpine sleep 500)
docker start $cid

I think this issue still exists and is a bug.

This bug occurs for me when starting services via docker-compose on another docker-machine or host attached to the same overlay network. Compose tries to create the container with the same name (<project>_<service>_1), which conflicts with the endpoint already registered from the other docker-machine (also <project>_<service>_1).

ERROR: for registrator  service endpoint with name swarmnodes_registrator_1 already exists

ERROR: for consul-agent  service endpoint with name swarmnodes_consul-agent_1 already exists
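
One way to sidestep the collision, assuming docker-compose's -p/--project-name flag (which overrides the project name Compose derives from the directory), would be to give each host its own project name, so the generated container names no longer clash:

$ docker-compose -p swarmnodes_hostb up -d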

@mavenugo migrating to swarm mode is very difficult, because there is no alternative to docker-compose (multi-host). Docker bundles do not support many features and are experimental.

$ docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use the bundle feature of the Docker experimental build.

see: https://github.com/docker/compose/issues/3868

Same issue here. I think it is related to libnetwork or libkv changes. Worth looking into what changes, if any, were made prior to the versions referenced in https://github.com/moby/moby/issues/20398#issuecomment-362659950

The issue stems from the fact that the key-value store still has an entry for that container under docker/network/v1.0/endpoint and also still counts it under docker/network/v1.0/endpoint_count.

For example, I have 2 hosts:

  • Host A: 2 running containers, worker1 and worker2
  • Host B: 1 container in Created state.

That is three containers in total, yet when I query my KV store, I get {"Count":4}:

# curl -L -s "http://consul:8500/v1/kv/docker/network/v1.0/endpoint_count/?recurse&pretty"
[
    {
        "LockIndex": 0,
        "Key": "docker/network/v1.0/endpoint_count/b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be/",
        "Flags": 0,
        "Value": "eyJDb3VudCI6NH0=",
        "CreateIndex": 124226,
        "ModifyIndex": 124256
    }
]

# echo "eyJDb3VudCI6NH0=" | base64 -d
{"Count":4}

Next, I remove the container in Created state from Host B.

# docker rm ae1c9eba7ba6
ae1c9eba7ba6

I repeat the KV store query and still get {"Count":4}. That's expected, because libnetwork performs network operations when it stops a container; there are no such operations on removal.

Next, I get the keys from each endpoint on that network:

curl -L -s "http://consul:8500/v1/kv/docker/network/v1.0/endpoint/b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be/?recurse&pretty"

The output will have 4 keys.

After decoding the values one by one, I discover the stale endpoint (note that the container was already removed). This means that either libnetwork never issued the delete call or libkv failed to update the store.

# curl -L -s "http://consul:8500/v1/kv/docker/network/v1.0/endpoint/b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be/45fc7ece9c828ea63670d1b9c3188250408147f5ed9c1c83cae58d1c938f5194/?recurse&pretty"

[
    {
        "LockIndex": 0,
        "Key": "docker/network/v1.0/endpoint/b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be/45fc7ece9c828ea63670d1b9c3188250408147f5ed9c1c83cae58d1c938f5194/",
        "Flags": 0,
        "Value": "eyJhbm9ueW1vdXMiOmZhbHNlLCJkaXNhYmxlUmVzb2x1dGlvbiI6ZmFsc2UsImVwX2lmYWNlIjp7ImFkZHIiOiIxMC4xMC4xMC4yLzI0IiwiZHN0UHJlZml4IjoiZXRoIiwibWFjIjoiMDI6OGE6MjI6ZDI6M2Q6ZjQiLCJyb3V0ZXMiOm51bGwsInNyY05hbWUiOiI0NWZjN2VjZTljODI4X2MiLCJ2NFBvb2xJRCI6Ikdsb2JhbERlZmF1bHQvMTAuMTAuMTAuMC8yNCIsInY2UG9vbElEIjoiIn0sImV4cG9zZWRfcG9ydHMiOltdLCJnZW5lcmljIjp7ImNvbS5kb2NrZXIubmV0d29yay5lbmRwb2ludC5leHBvc2VkcG9ydHMiOltdLCJjb20uZG9ja2VyLm5ldHdvcmsucG9ydG1hcCI6W119LCJpZCI6IjQ1ZmM3ZWNlOWM4MjhlYTYzNjcwZDFiOWMzMTg4MjUwNDA4MTQ3ZjVlZDljMWM4M2NhZTU4ZDFjOTM4ZjUxOTQiLCJsb2NhdG9yIjoiIiwibXlBbGlhc2VzIjpudWxsLCJuYW1lIjoid29ya2VyMyIsInNhbmRib3giOiI5NjFhN2Q3YjkwNjY1NzkyOTUxNGNiMzNhZDJkOGMxMDRlZWYwNWMyNDA1OTI0ZTBlMjg0YWJhNTAzNWVjNDQ1In0=",
        "CreateIndex": 124239,
        "ModifyIndex": 124241
    }
]

Decoding the Value:

# echo "eyJhbm9ueW1vdXMiOmZhbHNlLCJkaXNhYmxlUmVzb2x1dGlvbiI6ZmFsc2UsImVwX2lmYWNlIjp7ImFkZHIiOiIxMC4xMC4xMC4yLzI0IiwiZHN0UHJlZml4IjoiZXRoIiwibWFjIjoiMDI6OGE6MjI6ZDI6M2Q6ZjQiLCJyb3V0ZXMiOm51bGwsInNyY05hbWUiOiI0NWZjN2VjZTljODI4X2MiLCJ2NFBvb2xJRCI6Ikdsb2JhbERlZmF1bHQvMTAuMTAuMTAuMC8yNCIsInY2UG9vbElEIjoiIn0sImV4cG9zZWRfcG9ydHMiOltdLCJnZW5lcmljIjp7ImNvbS5kb2NrZXIubmV0d29yay5lbmRwb2ludC5leHBvc2VkcG9ydHMiOltdLCJjb20uZG9ja2VyLm5ldHdvcmsucG9ydG1hcCI6W119LCJpZCI6IjQ1ZmM3ZWNlOWM4MjhlYTYzNjcwZDFiOWMzMTg4MjUwNDA4MTQ3ZjVlZDljMWM4M2NhZTU4ZDFjOTM4ZjUxOTQiLCJsb2NhdG9yIjoiIiwibXlBbGlhc2VzIjpudWxsLCJuYW1lIjoid29ya2VyMyIsInNhbmRib3giOiI5NjFhN2Q3YjkwNjY1NzkyOTUxNGNiMzNhZDJkOGMxMDRlZWYwNWMyNDA1OTI0ZTBlMjg0YWJhNTAzNWVjNDQ1In0=" | base64 -d | python -m json.tool

{
    "anonymous": false,
    "disableResolution": false,
    "ep_iface": {
        "addr": "10.10.10.2/24",
        "dstPrefix": "eth",
        "mac": "02:8a:22:d2:3d:f4",
        "routes": null,
        "srcName": "45fc7ece9c828_c",
        "v4PoolID": "GlobalDefault/10.10.10.0/24",
        "v6PoolID": ""
    },
    "exposed_ports": [],
    "generic": {
        "com.docker.network.endpoint.exposedports": [],
        "com.docker.network.portmap": []
    },
    "id": "45fc7ece9c828ea63670d1b9c3188250408147f5ed9c1c83cae58d1c938f5194",
    "locator": "",
    "myAliases": null,
    "name": "worker3",
    "sandbox": "961a7d7b906657929514cb33ad2d8c104eef05c2405924e0e284aba5035ec445"
}

Now, to make it go away, I need to remove that endpoint entry:

curl -L -s --request DELETE "http://consul:8500/v1/kv/docker/network/v1.0/endpoint/b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be/45fc7ece9c828ea63670d1b9c3188250408147f5ed9c1c83cae58d1c938f5194/?recurse&pretty"
true

Also, decrement the Count by 1, i.e. from 4 to 3:

# curl -X PUT -d '{"Count":3}' "http://consul:8500/v1/kv/docker/network/v1.0/endpoint_count/b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be/"
true

# curl "http://consul:8500/v1/kv/docker/network/v1.0/endpoint_count/b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be/?recurse&pretty"
[
    {
        "LockIndex": 0,
        "Key": "docker/network/v1.0/endpoint_count/b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be/",
        "Flags": 0,
        "Value": "eyJDb3VudCI6M30=",
        "CreateIndex": 124226,
        "ModifyIndex": 139139
    }
]

# echo "eyJDb3VudCI6M30=" | base64 -d
{"Count":3}

After the above I can start my container worker3 successfully:

# docker run -d -t --net=mynet --name=worker3 centos
d12ddf824c4c9861dd22cb4ffd48a55b782b5511e9444452665af32a8200473e
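
To spot such stale entries without decoding keys by hand, a minimal sketch along these lines could help (assumptions: Consul reachable at consul:8500, jq installed, and the docker/network/v1.0 key layout shown above); compare the printed names against docker ps on each node:

# NET_ID is the network's full ID, e.g. the one from the walkthrough above
NET_ID=b55f33a1b81f824e2563ebcae64d6ace0f976f32a450dc36a5ea29b998f9d5be
curl -sL "http://consul:8500/v1/kv/docker/network/v1.0/endpoint/$NET_ID/?recurse" \
  | jq -r '.[].Value' \
  | while read -r v; do
      # each Value is a base64-encoded endpoint record; print its container name
      echo "$v" | base64 -d | jq -r '.name'
    done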

We also seem to have this issue when using Docker.

docker info output:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 25
Server Version: 1.12.6
Storage Driver: overlay
 Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host overlay bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.9.0
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: aarch64
CPUs: 64
Total Memory: 125.8 GiB
Name: test2.arm32.com.local.lan
ID: MIGN:UBGH:K4WC:MJB3:QOKV:O2DM:SFFD:A5II:LY4Z:FJ5G:AEFW:5BOF
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8

Can someone please re-open this issue?

I have run into this issue as well after deleting and recreating containers on the same network. It does seem to be sporadic, but it has occurred at least a handful of times after starting/stopping ~500 containers. The container in question does NOT appear in the output of docker network inspect <network_name>, but the stale endpoint DOES appear to be successfully cleared when manually doing a force remove with docker network disconnect -f <network_name> <container_name>.

In the syslog after removing the network I even see the failure to delete the endpoint:

Mar 14 18:23:57 SERVER docker[14917]: time="2016-03-14T18:23:57.780234901Z" level=warning msg="driver error deleting endpoint <container_name> : endpoint id \"6524d91e5889d4d7baed97a89d58cb9dae6c4c74bc1a376359a6bc636af7959a\" not found"

@winggundamth this error will be seen if there is another active container in the same network on any other node pointing to the same KV store (which makes it part of the same cluster). Can you please confirm whether you have a similar setup on another node and executed the same compose file (in the same directory structure)? If yes, the simplest way to confirm the root cause is to create a new directory with a unique name (and copy the compose file into it) and run docker-compose up from there. Can you please confirm whether that works?
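
A minimal sketch of that test (assuming the compose file is named docker-compose.yml in the current directory; Compose derives the project name from the directory basename, so a fresh unique directory yields a fresh project name):

$ dir=$(mktemp -d)
$ cp docker-compose.yml "$dir"
$ cd "$dir" && docker-compose up -d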