ddev: Multiple ambiguous docker networks created
Is there an existing issue for this?
- I have searched the existing issues
Output of ddev debug test
Running bash [-c /var/folders/1m/cv08gd016zs66h2p26szn0840000gn/T/test_ddev.sh]
======= Existing project config =========
These config files were loaded for project npg: [/Users/peter/projects/npg/npg_web/.ddev/config.yaml]
name: npg
type: php
docroot: htdocs
php_version: 8.1
webserver_type: apache-fpm
webimage: drud/ddev-webserver:v1.21.5
router_http_port: 80
router_https_port: 443
additional_hostnames: []
additional_fqdns: []
mariadb_version: 10.5
database: {mariadb 10.5}
mailhog_port: 8025
mailhog_https_port: 8026
phpmyadmin_port: 8036
phpmyadmin_https_port: 8037
project_tld: ddev.site
use_dns_when_possible: true
nodejs_version: 16
default_container_timeout: 120
======= Creating dummy project named tryddevproject-9274 in ../tryddevproject-9274 =========
OS Information: Darwin MBP2021.local 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000 arm64
ProductName: macOS
ProductVersion: 13.4.1
BuildVersion: 22F82
User information: uid=501(peter) gid=20(staff) groups=20(staff),12(everyone),61(localaccounts),79(_appserverusr),80(admin),81(_appserveradm),98(_lpadmin),701(com.apple.sharepoint.group.1),33(_appstore),100(_lpoperator),204(_developer),250(_analyticsusers),395(com.apple.access_ftp),398(com.apple.access_screensharing),399(com.apple.access_ssh),400(com.apple.access_remote_ae)
DDEV version: ITEM VALUE
DDEV version v1.21.6
architecture arm64
db drud/ddev-dbserver-mariadb-10.4:v1.21.5
dba phpmyadmin:5
ddev-ssh-agent drud/ddev-ssh-agent:v1.21.5
docker 24.0.4
docker-compose v2.15.1
docker-platform docker
mutagen 0.16.0
os darwin
router drud/ddev-router:v1.21.5
web drud/ddev-webserver:v1.21.5
PROXY settings: HTTP_PROXY='' HTTPS_PROXY='' http_proxy='' NO_PROXY=''
======= DDEV global info =========
Global configuration:
instrumentation-opt-in=true
omit-containers=[]
mutagen-enabled=false
nfs-mount-enabled=false
router-bind-all-interfaces=false
internet-detection-timeout-ms=3000
disable-http2=false
use-letsencrypt=false
letsencrypt-email=
table-style=default
simple-formatting=false
auto-restart-containers=false
use-hardened-images=false
fail-on-hook-fail=false
required-docker-compose-version=
use-docker-compose-from-path=false
project-tld=
xdebug-ide-location=
no-bind-mounts=false
use-traefik=false
wsl2-no-windows-hosts-mgt=false
======= DOCKER info =========
docker location: lrwxr-xr-x 1 root wheel 53 24 Jul 15:42 /usr/local/bin/docker -> /Applications/OrbStack.app/Contents/MacOS/xbin/docker
Docker Desktop Version: Docker Desktop for Mac 4.21.1 build 114176
docker version:
Client:
Version: 24.0.4
API version: 1.43
Go version: go1.20.5
Git commit: 3713ee1
Built: Fri Jul 7 14:47:27 2023
OS/Arch: darwin/arm64
Context: default
Server:
Engine:
Version: 24.0.4
API version: 1.43 (minimum version 1.12)
Go version: go1.20.6
Git commit: 4ffc61430bbe6d3d405bdf357b766bf303ff3cc5
Built: Fri Jul 14 13:18:38 2023
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: v1.7.2
GitCommit: 0cae528dd6cb557f7201036e9f43420650207b58
runc:
Version: 1.1.8
GitCommit: 82f18fe0e44a59034f3e1f45e475fa5636e539aa
docker-init:
Version: 0.19.0
GitCommit:
DOCKER_DEFAULT_PLATFORM=notset
======= Mutagen Info =========
======= Docker Info =========
Docker platform: docker
Using docker context: default (unix:///Users/peter/.orbstack/run/docker.sock)
docker-compose: v2.15.1
Using DOCKER_HOST=unix:///Users/peter/.orbstack/run/docker.sock
Docker version: 24.0.4
Able to run simple container that mounts a volume.
Able to use internet inside container.
Docker disk space:
Filesystem Size Used Available Use% Mounted on
overlay 264.2G 5.5G 258.8G 2% /
Container ddev-npg-elasticsearch Removed
Container ddev-npg-db Removed
Container ddev-npg-web Removed
Container ddev-npg-dba Removed
Network ddev-npg_default Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints
Failed to docker-compose down: ComposeCmd failed to run 'COMPOSE_PROJECT_NAME=ddev-npg docker-compose -f /Users/peter/projects/npg/npg_web/.ddev/.ddev-docker-compose-full.yaml down', action='[down]', err='exit status 1', stdout='', stderr='Container ddev-npg-web Stopping
Container ddev-npg-web Stopping
Container ddev-npg-dba Stopping
Container ddev-npg-dba Stopping
Container ddev-npg-db Stopping
Container ddev-npg-db Stopping
Container ddev-npg-elasticsearch Stopping
Container ddev-npg-elasticsearch Stopping
Container ddev-npg-elasticsearch Stopped
Container ddev-npg-elasticsearch Removing
Container ddev-npg-elasticsearch Removed
Container ddev-npg-db Stopped
Container ddev-npg-db Removing
Container ddev-npg-db Removed
Container ddev-npg-web Stopped
Container ddev-npg-web Removing
Container ddev-npg-web Removed
Container ddev-npg-dba Stopped
Container ddev-npg-dba Removing
Container ddev-npg-dba Removed
Network ddev-npg_default Removing
Network ddev-npg_default Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints'
Removing container: ddev-npg-web-run-db17b45e5f8c
Removing container: ddev-npg-web-run-501f02f3f63d
Project npg has been stopped.
The ddev-ssh-agent container has been removed. When you start it again you will have to use 'ddev auth ssh' to provide key authentication again.
Existing docker containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Creating a new ddev project config in the current directory (/Users/peter/projects/npg/tryddevproject-9274)
Once completed, your configuration will be written to /Users/peter/projects/npg/tryddevproject-9274/.ddev/config.yaml
Configuring unrecognized codebase as project type 'php' at /Users/peter/projects/npg/tryddevproject-9274/web
Configuration complete. You may now run 'ddev start'.
Network ddev_default created
Starting tryddevproject-9274...
Container ddev-ssh-agent Started
ssh-agent container is running: If you want to add authentication to the ssh-agent container, run 'ddev auth ssh' to enable your keys.
v1.21.5: Pulling from drud/ddev-dbserver-mariadb-10.4
970e18d4d6e7: Already exists
8e1696491692: Already exists
fdaddccffd64: Already exists
3916deba3048: Already exists
1cd2db5c00a0: Pull complete
763136347ea3: Pull complete
9fbbbbe6aadb: Pull complete
3d5e4be2696f: Pull complete
e4554e9c224b: Pull complete
4f4fb700ef54: Pull complete
1dc40240383f: Pull complete
e874df046cc2: Pull complete
86140e227aaa: Pull complete
22ff3eb0b980: Pull complete
9e61d5ae7fe0: Pull complete
a3c19fbcb613: Pull complete
3c5741040871: Pull complete
1ffc539f9ccb: Pull complete
ca0cb63dbcb0: Pull complete
eada3ebe2860: Pull complete
149b49488bc6: Pull complete
8d2045d8a17d: Pull complete
c891587387a1: Pull complete
0be54c4d4ee7: Pull complete
Digest: sha256:5c09a3302f5d77780e7cf2987762aefe2c0e0bcb603cb9624d1cc8624a9c6f30
Status: Downloaded newer image for drud/ddev-dbserver-mariadb-10.4:v1.21.5
docker.io/drud/ddev-dbserver-mariadb-10.4:v1.21.5
Network ddev-tryddevproject-9274_default Created
Container ddev-tryddevproject-9274-web Started
Container ddev-tryddevproject-9274-db Started
Container ddev-tryddevproject-9274-dba Started
Container ddev-router Started
Successfully started tryddevproject-9274
Project can be reached at https://tryddevproject-9274.ddev.site https://127.0.0.1:32783
======== Curl of site from inside container:
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 25 Jul 2023 08:33:01 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
======== curl -I of http://tryddevproject-9274.ddev.site from outside:
HTTP/1.1 200 OK
Server: nginx/1.20.1
Date: Tue, 25 Jul 2023 08:33:01 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
======== full curl of http://tryddevproject-9274.ddev.site from outside:
Success accessing database... db via TCP/IP
ddev is working. You will want to delete this project with 'ddev delete -Oy tryddevproject-9274'
======== Project ownership on host:
drwxr-xr-x 4 peter staff 128 25 Jul 09:32 ../tryddevproject-9274
======== Project ownership in container:
drwxr-xr-x 4 peter dialout 128 Jul 25 08:32 /var/www/html
======== In-container filesystem:
Filesystem Type 1K-blocks Used Available Use% Mounted on
mac virtiofs 971350180 688718612 282631568 71% /var/www/html
======== curl again of tryddevproject-9274 from host:
Success accessing database... db via TCP/IP
ddev is working. You will want to delete this project with 'ddev delete -Oy tryddevproject-9274'
Thanks for running the diagnostic. It was successful.
Please provide the output of this script in a new gist at gist.github.com
Running ddev launch in 5 seconds
If you're brave and you have jq you can delete all tryddevproject instances with this one-liner:
ddev delete -Oy $(ddev list -j |jq -r .raw[].name | grep tryddevproject)
In the future ddev debug test will also provide this option.
Please delete this project after debugging with 'ddev delete -Oy tryddevproject-9274'
Expected Behavior
I have a PHP site running within ddev, and on my host machine I run vite with a default configuration (so bound to localhost:5173).
I’d expect them to run alongside each other, but running Vite’s dev server with HMR seems to kill DDEV.
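For context, the workflow I'd expect to work is roughly this (the terminal split and paths are illustrative, not exact commands from my history):

```bash
# Terminal 1: start the DDEV project from the project root
ddev start

# Terminal 2: run Vite's dev server from the front-end subfolder
# (it binds to https://localhost:5173 with the config shown further down)
npm run dev
```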
Actual Behavior
HTTP 502: unresponsive/broken ddev back-end site. This is the message from the ddev-router container: “The back-end webserver at the URL you specified is not responding. You may want to use “ddev restart” to restart the site.”
The Docker network hangs around, and if I do a `ddev stop` it fails to be removed (see the output under Steps To Reproduce).
Steps To Reproduce
So, this doesn’t reproduce 100% of the time but over the last couple of days this is the closest I can get to a reproducer.
When the HTTP 502 happens I try the following:
> ddev stop
Container ddev-npg-db Removed
Container ddev-npg-elasticsearch Removed
Container ddev-npg-web Removed
Container ddev-npg-dba Removed
Network ddev-npg_default Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints
Failed to docker-compose down: ComposeCmd failed to run 'COMPOSE_PROJECT_NAME=ddev-npg docker-compose -f /Users/peter/projects/npg/npg_web/.ddev/.ddev-docker-compose-full.yaml down', action='[down]', err='exit status 1', stdout='', stderr='Container ddev-npg-web Stopping
Container ddev-npg-web Stopping
Container ddev-npg-db Stopping
Container ddev-npg-db Stopping
Container ddev-npg-elasticsearch Stopping
Container ddev-npg-elasticsearch Stopping
Container ddev-npg-dba Stopping
Container ddev-npg-dba Stopping
Container ddev-npg-db Stopped
Container ddev-npg-db Removing
Container ddev-npg-db Removed
Container ddev-npg-elasticsearch Stopped
Container ddev-npg-elasticsearch Removing
Container ddev-npg-elasticsearch Removed
Container ddev-npg-web Stopped
Container ddev-npg-web Removing
Container ddev-npg-web Removed
Container ddev-npg-dba Stopped
Container ddev-npg-dba Removing
Container ddev-npg-dba Removed
Network ddev-npg_default Removing
Network ddev-npg_default Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints'
Removing container: ddev-npg-web-run-cead698ab009
Project npg has been stopped.
Trying to restart leads to:
> ddev start
Starting npg...
Using custom mysql configuration: [/Users/peter/projects/npg/npg_web/.ddev/mysql/my-npg.cnf]
Custom configuration is updated on restart.
If you don't see your custom configuration taking effect, run 'ddev restart'.
Container ddev-npg-dba Started
Container ddev-npg-web Started
Container ddev-npg-db Started
Container ddev-npg-elasticsearch Started
Container ddev-router Started
Failed to start npg: container(s) failed to become healthy before their configured timeout or in 120 seconds. This may be just a problem with the healthcheck and not a functional problem. (health check timed out: labels map[com.ddev.site-name:npg] timed out without becoming healthy, status=, detail= ddev-npg-web-run-bd3dd0de6a9a:starting - more info with `docker inspect --format "{{json .State.Health }}" ddev-npg-web-run-bd3dd0de6a9a` )
> docker network ls
NETWORK ID NAME DRIVER SCOPE
de785f219008 bridge bridge local
03c36ecc4635 ddev-npg_default bridge local
0c8e9b9f2705 ddev_default bridge local
d97be534fb66 host host local
b159031a67f9 none null local
> docker network rm ddev-npg_default
Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints
> ddev poweroff
Container ddev-npg-dba Removed
Container ddev-npg-db Removed
Container ddev-npg-elasticsearch Removed
Container ddev-npg-web Removed
Network ddev-npg_default Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints
Failed to docker-compose down: ComposeCmd failed to run 'COMPOSE_PROJECT_NAME=ddev-npg docker-compose -f /Users/peter/projects/npg/npg_web/.ddev/.ddev-docker-compose-full.yaml down', action='[down]', err='exit status 1', stdout='', stderr='Container ddev-npg-web Stopping
Container ddev-npg-elasticsearch Stopping
Container ddev-npg-web Stopping
Container ddev-npg-elasticsearch Stopping
Container ddev-npg-db Stopping
Container ddev-npg-db Stopping
Container ddev-npg-dba Stopping
Container ddev-npg-dba Stopping
Container ddev-npg-web Stopped
Container ddev-npg-web Removing
Container ddev-npg-dba Stopped
Container ddev-npg-dba Removing
Container ddev-npg-elasticsearch Stopped
Container ddev-npg-elasticsearch Removing
Container ddev-npg-db Stopped
Container ddev-npg-db Removing
Container ddev-npg-dba Removed
Container ddev-npg-db Removed
Container ddev-npg-elasticsearch Removed
Container ddev-npg-web Removed
Network ddev-npg_default Removing
Network ddev-npg_default Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints'
Removing container: ddev-npg-web-run-bd3dd0de6a9a
Project npg has been stopped.
The ddev-ssh-agent container has been removed. When you start it again you will have to use 'ddev auth ssh' to provide key authentication again.
> docker network ls
NETWORK ID NAME DRIVER SCOPE
de785f219008 bridge bridge local
03c36ecc4635 ddev-npg_default bridge local
d97be534fb66 host host local
b159031a67f9 none null local
> docker network rm ddev-npg_default
ddev-npg_default
Once I run that, I can now run `ddev start` and it works.
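As an aside, one way to see what Docker still counts as an "active endpoint" on the stuck network is to inspect it directly (a sketch using only standard Docker CLI commands; the container name is a placeholder):

```bash
# list whatever Docker thinks is still attached to the network
docker network inspect ddev-npg_default --format '{{json .Containers}}'

# if anything is listed, force-disconnect it, then retry the removal
docker network disconnect -f ddev-npg_default <container-name-or-id>
docker network rm ddev-npg_default
```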
Now I run `npm run dev` again in my subfolder:
VITE v4.4.7 ready in 149 ms
➜ Local: https://localhost:5173/
➜ Network: use --host to expose
➜ press h to show help
Reload my PHP page a couple of times, and ddev goes back to returning the HTTP 502.
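When the 502 comes back, these are the sort of checks that could gather more detail (a sketch; the service and container names match the logs above):

```bash
# router logs usually say why the upstream is considered unavailable
docker logs ddev-router --tail 50

# web container logs and health state for the project
ddev logs -s web
docker inspect --format '{{json .State.Health}}' ddev-npg-web
```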
Anything else?
I thought it might be a bug in Docker, so I installed OrbStack, with this DDEV setup as the only set of containers/networks in it. The problem happens just the same, which is why I’m posting here: it may be down to DDEV instead.
vite.config.ts:
import { defineConfig } from 'vite'
// @ts-ignore
import fs from 'fs'
// @ts-ignore
import path from 'path'

export default defineConfig({
  server: {
    https: {
      key: fs.readFileSync(path.resolve(__dirname, 'localhost-key.pem')),
      cert: fs.readFileSync(path.resolve(__dirname, 'localhost.pem'))
    }
  },
  plugins: [
  ],
  css: {
    devSourcemap: true
  },
  build: {
    manifest: true,
    rollupOptions: {
      input: {
        main: './main.js',
      }
    },
    outDir: '../htdocs/assets/dist'
  }
})
And the minimal parts of my package.json:
{
  "dependencies": {
  },
  "scripts": {
    "dev": "rm -rf dist/* && vite",
    "build": "vite build",
    "preview": "vite preview"
  },
  "devDependencies": {
    "vite": "^4.4.7"
  }
}
About this issue
- State: closed
- Created a year ago
- Comments: 63 (47 by maintainers)
I finally tested @stasadev's work today using the latest nightly on ARM64 with OrbStack as the backend, and so far it seems to have fixed the problem. I'll keep testing.
@pbowyer could you try the artifacts from https://github.com/ddev/ddev/pull/5533#issuecomment-1809202466 ?
I decided that making the project networks external was too intrusive and found a way to make them internal as before, but with the same duplicate check.
I hope this change will have the same effect with a simpler approach.
😁 @stasadev Thank you! 😁
I opened “proj3” in PhpStorm, a project which has always caused trouble. Your patch worked around the problem and `ddev` ran without error. Here’s the output:

I would dearly love Docker/OrbStack (which I use)/PhpStorm to stop creating unhealthy containers and duplicate networks, but thanks to you I don’t have to think about it each time I run `ddev` 😀

@pbowyer thanks for the feedback, I think I understand the problem better now.
I have a few ideas to fix this, since I can now control the project network:
- On `ddev poweroff`, look for the network ID, not the network name.
- There is a `CheckDuplicate` option that I can use when creating a network.

I will create a PR for this today or tomorrow.
Edit: I decided not to use `CheckDuplicate` as this option will be removed in Docker 25 anyway.

I had this problem happen to me again today, using a nightly build. Command outputs slightly edited to hide the company I’m working for:

From this it looks to have also happened to another project, proj2, that I worked on last week but didn’t start the environment for.
Edit: this time `ddev poweroff` wasn’t enough to clear it. I had to resort to

The workaround has been merged into the master branch and will appear in v1.22.5.
Until then anyone can use a nightly build.
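Until the release lands, a rough way to spot the "multiple ambiguous networks" situation by hand is to list networks by name and check for duplicate IDs (my own sketch, not part of the fix; the stale ID is a placeholder):

```bash
# if the same compose network name shows up more than once,
# docker-compose can no longer address it unambiguously
docker network ls --filter name=ddev-npg_default --format '{{.ID}} {{.Name}}'

# remove the stale copy by ID rather than by name
docker network rm <stale-network-id>
```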
🚀 Congrats.
Looking forward to seeing everybody’s results on this in the time before the next point release.
You’re awesome for tracking that so carefully!
Thanks for spotting that issue @stasadev - I wonder why this doesn’t happen more. The project network could probably be created externally. It makes some things awkward, and using docker-compose externally becomes harder (as with PhpStorm’s plugin for ddev). An experimental PR would be welcome.
Great work studying it!
Try `export DDEV_DEBUG=true` or, if desperate and you really want to read lots of stuff, `export DDEV_VERBOSE=true`, and then you can see exactly what’s going on.
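For example, a debugging session might look something like this (the environment variables are the ones named above; everything else is just an illustrative sequence):

```bash
# turn on DDEV's debug output for this shell, then start the project
export DDEV_DEBUG=true
ddev start

# if that's not enough detail, go fully verbose and restart
export DDEV_VERBOSE=true
ddev restart
```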
I think your study here is going to get us somewhere… that it may have to do with PhpStorm. When PhpStorm uses the .ddev/.ddev-docker-compose-full.yaml, it’s doing it without any input from DDEV. DDEV thinks it’s already done its job (or perhaps a project hasn’t started yet), but PhpStorm is using raw docker-compose. So maybe that’s our path. Thank you for the great study and debugging, I think we’re going to get somewhere!
With regard to your `ddev start` in a subfolder, I think that’s not the problem. I think you’ve accidentally run `ddev config` in a subfolder. `ddev config` gives a warning in that case, but perhaps you found a way to create a .ddev/config.yaml in a subdirectory previously:

But `ddev start` definitely knows how to find the correct project and doesn’t get confused about this.
Hiya, I have continued to experiment and write up notes. I have uncovered two OrbStack issues but have not yet got reproducers to be able to report them to OrbStack. See what you think, in case one of them involves ddev.
Issue 1: PhpStorm and OrbStack interplay
This only happens with particular PhpStorm projects, presumably ones that try to run something inside the Docker container automatically. It doesn’t involve the ddev plugin, as I’ve tried with it installed and removed.
What happens is when my ddev project is not running and I open PhpStorm, it does the following:
To fix this problem I can run `ddev stop`, which cleans everything up perfectly:

This is a nuisance, but now I know the fix I can work around it.
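A quick way to check whether something outside DDEV (for example PhpStorm running raw docker-compose) has brought containers up behind its back is to look for compose-labelled containers and leftover networks (a sketch; `ddev-npg` is my compose project name from the logs above):

```bash
# containers created via docker-compose for this project, whoever started them
docker ps -a --filter label=com.docker.compose.project=ddev-npg

# any ddev-related networks that are hanging around
docker network ls --filter name=ddev
```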
Issue 2: Subfolders, mutagen and project confusion
I only discovered this one late last night, so I haven’t yet had a chance to reproduce it with Docker Desktop to confirm it’s an OrbStack issue (I’ll edit this message when I do).
I had already run `ddev start` for the project. I `cd`’d into a subfolder, selected the wrong command from my history, and ran `ddev start` again. This got me into a broken state, because I got the following (message from a reproducer this morning):

I suspect `ddev destroy` would’ve been the easiest way to fix it, but I needed to back up the database before running that… oops.

I tried a few things and in the end powered off ddev and then manually deleted the volume `npg_project_mutagen`. After that `ddev start` worked and I backed up the DB.

Then I ran `ddev start` again in a subfolder (what can I say, I was tired). Same thing happened.

This time I ran `ddev config --performance-mode none`. Now when I accidentally run `ddev start` in a subfolder it prints:

and more importantly, the web container still works afterwards.
Aside: startup time
Even on a successful run the containers start and then there’s a 15s+ pause before the “Successfully started” message is printed. Is there any way I can see what’s going on in that time?
There’s actually nothing wrong with running `ddev start` multiple times. I’m just trying to understand whether there might be some workflow that triggers this.

I sleep/poweroff my laptop all the time without even stopping projects, and have never seen this on macOS or WSL2. Obviously it happens, though!
I just experimented with a few sites and saw only correct behavior with networks being deleted on stop.
Maybe it does have something to do with the vite service, though that would seem strange.
Please run `docker network ls` occasionally to see if you can catch it in the act.
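If it helps, a crude watcher like this could log the network list periodically so a duplicate gets caught in the act (just a sketch, not something from the thread):

```bash
# append a timestamped snapshot of ddev-related networks once a minute
while true; do
  date
  docker network ls --filter name=ddev
  sleep 60
done | tee -a ddev-network-watch.log
```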