ddev: Multiple ambiguous docker networks created

Is there an existing issue for this?

  • I have searched the existing issues

Output of ddev debug test

Expand `ddev debug test` diagnostic information
Running bash [-c /var/folders/1m/cv08gd016zs66h2p26szn0840000gn/T/test_ddev.sh]
======= Existing project config =========
These config files were loaded for project npg: [/Users/peter/projects/npg/npg_web/.ddev/config.yaml]
name: npg
type: php
docroot: htdocs
php_version: 8.1
webserver_type: apache-fpm
webimage: drud/ddev-webserver:v1.21.5
router_http_port: 80
router_https_port: 443
additional_hostnames: []
additional_fqdns: []
mariadb_version: 10.5
database: {mariadb 10.5}
mailhog_port: 8025
mailhog_https_port: 8026
phpmyadmin_port: 8036
phpmyadmin_https_port: 8037
project_tld: ddev.site
use_dns_when_possible: true
nodejs_version: 16
default_container_timeout: 120
======= Creating dummy project named  tryddevproject-9274 in ../tryddevproject-9274 =========
OS Information: Darwin MBP2021.local 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun  8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000 arm64
ProductName:		macOS
ProductVersion:		13.4.1
BuildVersion:		22F82
User information: uid=501(peter) gid=20(staff) groups=20(staff),12(everyone),61(localaccounts),79(_appserverusr),80(admin),81(_appserveradm),98(_lpadmin),701(com.apple.sharepoint.group.1),33(_appstore),100(_lpoperator),204(_developer),250(_analyticsusers),395(com.apple.access_ftp),398(com.apple.access_screensharing),399(com.apple.access_ssh),400(com.apple.access_remote_ae)
DDEV version:  ITEM             VALUE
 DDEV version     v1.21.6
 architecture     arm64
 db               drud/ddev-dbserver-mariadb-10.4:v1.21.5
 dba              phpmyadmin:5
 ddev-ssh-agent   drud/ddev-ssh-agent:v1.21.5
 docker           24.0.4
 docker-compose   v2.15.1
 docker-platform  docker
 mutagen          0.16.0
 os               darwin
 router           drud/ddev-router:v1.21.5
 web              drud/ddev-webserver:v1.21.5
PROXY settings: HTTP_PROXY='' HTTPS_PROXY='' http_proxy='' NO_PROXY=''
======= DDEV global info =========
Global configuration:
instrumentation-opt-in=true
omit-containers=[]
mutagen-enabled=false
nfs-mount-enabled=false
router-bind-all-interfaces=false
internet-detection-timeout-ms=3000
disable-http2=false
use-letsencrypt=false
letsencrypt-email=
table-style=default
simple-formatting=false
auto-restart-containers=false
use-hardened-images=false
fail-on-hook-fail=false
required-docker-compose-version=
use-docker-compose-from-path=false
project-tld=
xdebug-ide-location=
no-bind-mounts=false
use-traefik=false
wsl2-no-windows-hosts-mgt=false

======= DOCKER info =========
docker location: lrwxr-xr-x  1 root  wheel  53 24 Jul 15:42 /usr/local/bin/docker -> /Applications/OrbStack.app/Contents/MacOS/xbin/docker
Docker Desktop Version: Docker Desktop for Mac 4.21.1 build 114176
docker version:
Client:
 Version:           24.0.4
 API version:       1.43
 Go version:        go1.20.5
 Git commit:        3713ee1
 Built:             Fri Jul  7 14:47:27 2023
 OS/Arch:           darwin/arm64
 Context:           default

Server:
 Engine:
  Version:          24.0.4
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.6
  Git commit:       4ffc61430bbe6d3d405bdf357b766bf303ff3cc5
  Built:            Fri Jul 14 13:18:38 2023
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          v1.7.2
  GitCommit:        0cae528dd6cb557f7201036e9f43420650207b58
 runc:
  Version:          1.1.8
  GitCommit:        82f18fe0e44a59034f3e1f45e475fa5636e539aa
 docker-init:
  Version:          0.19.0
  GitCommit:
DOCKER_DEFAULT_PLATFORM=notset
======= Mutagen Info =========
======= Docker Info =========
Docker platform: docker
Using docker context: default (unix:///Users/peter/.orbstack/run/docker.sock)
docker-compose: v2.15.1
Using DOCKER_HOST=unix:///Users/peter/.orbstack/run/docker.sock
Docker version: 24.0.4
Able to run simple container that mounts a volume.
Able to use internet inside container.
Docker disk space:
Filesystem                Size      Used Available Use% Mounted on
overlay                 264.2G      5.5G    258.8G   2% /

Container ddev-npg-elasticsearch  Removed
Container ddev-npg-db  Removed
Container ddev-npg-web  Removed
Container ddev-npg-dba  Removed
Network ddev-npg_default  Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints
Failed to docker-compose down: ComposeCmd failed to run 'COMPOSE_PROJECT_NAME=ddev-npg docker-compose -f /Users/peter/projects/npg/npg_web/.ddev/.ddev-docker-compose-full.yaml down', action='[down]', err='exit status 1', stdout='', stderr='Container ddev-npg-web  Stopping
Container ddev-npg-web  Stopping
Container ddev-npg-dba  Stopping
Container ddev-npg-dba  Stopping
Container ddev-npg-db  Stopping
Container ddev-npg-db  Stopping
Container ddev-npg-elasticsearch  Stopping
Container ddev-npg-elasticsearch  Stopping
Container ddev-npg-elasticsearch  Stopped
Container ddev-npg-elasticsearch  Removing
Container ddev-npg-elasticsearch  Removed
Container ddev-npg-db  Stopped
Container ddev-npg-db  Removing
Container ddev-npg-db  Removed
Container ddev-npg-web  Stopped
Container ddev-npg-web  Removing
Container ddev-npg-web  Removed
Container ddev-npg-dba  Stopped
Container ddev-npg-dba  Removing
Container ddev-npg-dba  Removed
Network ddev-npg_default  Removing
Network ddev-npg_default  Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints'
Removing container: ddev-npg-web-run-db17b45e5f8c
Removing container: ddev-npg-web-run-501f02f3f63d
Project npg has been stopped.
The ddev-ssh-agent container has been removed. When you start it again you will have to use 'ddev auth ssh' to provide key authentication again.
Existing docker containers:
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
Creating a new ddev project config in the current directory (/Users/peter/projects/npg/tryddevproject-9274)
Once completed, your configuration will be written to /Users/peter/projects/npg/tryddevproject-9274/.ddev/config.yaml

Configuring unrecognized codebase as project type 'php' at /Users/peter/projects/npg/tryddevproject-9274/web
Configuration complete. You may now run 'ddev start'.
Network ddev_default created
Starting tryddevproject-9274...
Container ddev-ssh-agent  Started
ssh-agent container is running: If you want to add authentication to the ssh-agent container, run 'ddev auth ssh' to enable your keys.
v1.21.5: Pulling from drud/ddev-dbserver-mariadb-10.4
970e18d4d6e7: Already exists
8e1696491692: Already exists
fdaddccffd64: Already exists
3916deba3048: Already exists
1cd2db5c00a0: Pull complete
763136347ea3: Pull complete
9fbbbbe6aadb: Pull complete
3d5e4be2696f: Pull complete
e4554e9c224b: Pull complete
4f4fb700ef54: Pull complete
1dc40240383f: Pull complete
e874df046cc2: Pull complete
86140e227aaa: Pull complete
22ff3eb0b980: Pull complete
9e61d5ae7fe0: Pull complete
a3c19fbcb613: Pull complete
3c5741040871: Pull complete
1ffc539f9ccb: Pull complete
ca0cb63dbcb0: Pull complete
eada3ebe2860: Pull complete
149b49488bc6: Pull complete
8d2045d8a17d: Pull complete
c891587387a1: Pull complete
0be54c4d4ee7: Pull complete
Digest: sha256:5c09a3302f5d77780e7cf2987762aefe2c0e0bcb603cb9624d1cc8624a9c6f30
Status: Downloaded newer image for drud/ddev-dbserver-mariadb-10.4:v1.21.5
docker.io/drud/ddev-dbserver-mariadb-10.4:v1.21.5
Network ddev-tryddevproject-9274_default  Created
Container ddev-tryddevproject-9274-web  Started
Container ddev-tryddevproject-9274-db  Started
Container ddev-tryddevproject-9274-dba  Started
Container ddev-router  Started
Successfully started tryddevproject-9274
Project can be reached at https://tryddevproject-9274.ddev.site https://127.0.0.1:32783
======== Curl of site from inside container:
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 25 Jul 2023 08:33:01 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding

======== curl -I of http://tryddevproject-9274.ddev.site from outside:
HTTP/1.1 200 OK
Server: nginx/1.20.1
Date: Tue, 25 Jul 2023 08:33:01 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding

======== full curl of http://tryddevproject-9274.ddev.site from outside:
Success accessing database... db via TCP/IP
ddev is working. You will want to delete this project with 'ddev delete -Oy tryddevproject-9274'
======== Project ownership on host:
drwxr-xr-x  4 peter  staff  128 25 Jul 09:32 ../tryddevproject-9274
======== Project ownership in container:
drwxr-xr-x 4 peter dialout 128 Jul 25 08:32 /var/www/html
======== In-container filesystem:
Filesystem     Type     1K-blocks      Used Available Use% Mounted on
mac            virtiofs 971350180 688718612 282631568  71% /var/www/html
======== curl again of tryddevproject-9274 from host:
Success accessing database... db via TCP/IP
ddev is working. You will want to delete this project with 'ddev delete -Oy tryddevproject-9274'
Thanks for running the diagnostic. It was successful.
Please provide the output of this script in a new gist at gist.github.com
Running ddev launch in 5 seconds
If you're brave and you have jq you can delete all tryddevproject instances with this one-liner:
    ddev delete -Oy $(ddev list -j |jq -r .raw[].name | grep tryddevproject)
In the future ddev debug test will also provide this option.

Please delete this project after debugging with 'ddev delete -Oy tryddevproject-9274'

Expected Behavior

I have a PHP site running within ddev, and on my host machine I run vite with a default configuration (so bound to localhost:5173).

I’d expect them to run alongside each other, but running Vite’s dev server with HMR seems to kill DDEV.

Actual Behavior

502: Unresponsive/broken ddev back-end site. This is the ddev-router container: The back-end webserver at the URL you specified is not responding. You may want to use “ddev restart” to restart the site.

The Docker network hangs around, and if I do a ddev stop it fails to be removed because it still has active endpoints (see the steps below).
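
For reference, the endpoints that are still attached to the stuck network can be listed with docker network inspect. A minimal check, assuming the ddev-npg_default network name from the logs below:

# Show which containers/endpoints are still attached to the stuck network
docker network inspect ddev-npg_default --format '{{json .Containers}}'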

Steps To Reproduce

So, this doesn’t reproduce 100% of the time but over the last couple of days this is the closest I can get to a reproducer.

When the HTTP 502 happens I try the following:

> ddev stop
Container ddev-npg-db  Removed
Container ddev-npg-elasticsearch  Removed
Container ddev-npg-web  Removed
Container ddev-npg-dba  Removed
Network ddev-npg_default  Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints
Failed to docker-compose down: ComposeCmd failed to run 'COMPOSE_PROJECT_NAME=ddev-npg docker-compose -f /Users/peter/projects/npg/npg_web/.ddev/.ddev-docker-compose-full.yaml down', action='[down]', err='exit status 1', stdout='', stderr='Container ddev-npg-web  Stopping
Container ddev-npg-web  Stopping
Container ddev-npg-db  Stopping
Container ddev-npg-db  Stopping
Container ddev-npg-elasticsearch  Stopping
Container ddev-npg-elasticsearch  Stopping
Container ddev-npg-dba  Stopping
Container ddev-npg-dba  Stopping
Container ddev-npg-db  Stopped
Container ddev-npg-db  Removing
Container ddev-npg-db  Removed
Container ddev-npg-elasticsearch  Stopped
Container ddev-npg-elasticsearch  Removing
Container ddev-npg-elasticsearch  Removed
Container ddev-npg-web  Stopped
Container ddev-npg-web  Removing
Container ddev-npg-web  Removed
Container ddev-npg-dba  Stopped
Container ddev-npg-dba  Removing
Container ddev-npg-dba  Removed
Network ddev-npg_default  Removing
Network ddev-npg_default  Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints'
Removing container: ddev-npg-web-run-cead698ab009
Project npg has been stopped.

Trying to restart leads to:

> ddev start
Starting npg...
Using custom mysql configuration: [/Users/peter/projects/npg/npg_web/.ddev/mysql/my-npg.cnf]
Custom configuration is updated on restart.
If you don't see your custom configuration taking effect, run 'ddev restart'.
Container ddev-npg-dba  Started
Container ddev-npg-web  Started
Container ddev-npg-db  Started
Container ddev-npg-elasticsearch  Started
Container ddev-router  Started
Failed to start npg: container(s) failed to become healthy before their configured timeout or in 120 seconds. This may be just a problem with the healthcheck and not a functional problem. (health check timed out: labels map[com.ddev.site-name:npg] timed out without becoming healthy, status=, detail= ddev-npg-web-run-bd3dd0de6a9a:starting - more info with `docker inspect --format "{{json .State.Health }}" ddev-npg-web-run-bd3dd0de6a9a` )
> docker network ls
NETWORK ID     NAME               DRIVER    SCOPE
de785f219008   bridge             bridge    local
03c36ecc4635   ddev-npg_default   bridge    local
0c8e9b9f2705   ddev_default       bridge    local
d97be534fb66   host               host      local
b159031a67f9   none               null      local

> docker network rm ddev-npg_default
Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints

> ddev poweroff
Container ddev-npg-dba  Removed
Container ddev-npg-db  Removed
Container ddev-npg-elasticsearch  Removed
Container ddev-npg-web  Removed
Network ddev-npg_default  Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints
Failed to docker-compose down: ComposeCmd failed to run 'COMPOSE_PROJECT_NAME=ddev-npg docker-compose -f /Users/peter/projects/npg/npg_web/.ddev/.ddev-docker-compose-full.yaml down', action='[down]', err='exit status 1', stdout='', stderr='Container ddev-npg-web  Stopping
Container ddev-npg-elasticsearch  Stopping
Container ddev-npg-web  Stopping
Container ddev-npg-elasticsearch  Stopping
Container ddev-npg-db  Stopping
Container ddev-npg-db  Stopping
Container ddev-npg-dba  Stopping
Container ddev-npg-dba  Stopping
Container ddev-npg-web  Stopped
Container ddev-npg-web  Removing
Container ddev-npg-dba  Stopped
Container ddev-npg-dba  Removing
Container ddev-npg-elasticsearch  Stopped
Container ddev-npg-elasticsearch  Removing
Container ddev-npg-db  Stopped
Container ddev-npg-db  Removing
Container ddev-npg-dba  Removed
Container ddev-npg-db  Removed
Container ddev-npg-elasticsearch  Removed
Container ddev-npg-web  Removed
Network ddev-npg_default  Removing
Network ddev-npg_default  Error
failed to remove network ddev-npg_default: Error response from daemon: error while removing network: network ddev-npg_default id 03c36ecc4635f7146f09cb9181b3c8db1f98b3c0583e1039aed6e3df3fdc9649 has active endpoints'
Removing container: ddev-npg-web-run-bd3dd0de6a9a
Project npg has been stopped.
The ddev-ssh-agent container has been removed. When you start it again you will have to use 'ddev auth ssh' to provide key authentication again.

> docker network ls
NETWORK ID     NAME               DRIVER    SCOPE
de785f219008   bridge             bridge    local
03c36ecc4635   ddev-npg_default   bridge    local
d97be534fb66   host               host      local
b159031a67f9   none               null      local

> docker network rm ddev-npg_default
ddev-npg_default

Once I run that, I can run ddev start again and it works.
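
An alternative I haven't verified here: rather than waiting for whatever is holding the network, the lingering endpoints can be force-detached first and the network removed afterwards. A rough sketch, assuming the same ddev-npg_default network:

# Force-detach every endpoint still attached to the network, then remove it
for c in $(docker network inspect ddev-npg_default --format '{{range .Containers}}{{.Name}} {{end}}'); do
  docker network disconnect -f ddev-npg_default "$c"
done
docker network rm ddev-npg_default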

Now I run npm run dev again in my subfolder:

  VITE v4.4.7  ready in 149 ms

  ➜  Local:   https://localhost:5173/
  ➜  Network: use --host to expose
  ➜  press h to show help

Reload my PHP page a couple of times, and ddev goes back to returning the HTTP 502.

Anything else?

I thought it might be a bug in Docker, so I installed OrbStack, with this DDEV setup as the only set of containers/networks in it. The problem happens just the same, which is why I’m posting here: it may be a DDEV issue instead.

vite.config.ts:

import { defineConfig } from 'vite'
// @ts-ignore
import fs from 'fs'
// @ts-ignore
import path from 'path'

export default defineConfig({
    server: {
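        // Run the dev server over HTTPS; these .pem files are local certificates (presumably generated with a tool like mkcert)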
        https: {
            key: fs.readFileSync(path.resolve(__dirname, 'localhost-key.pem')),
            cert: fs.readFileSync(path.resolve(__dirname, 'localhost.pem'))
        }
    },
    plugins: [
    ],
    css: {
        devSourcemap: true
    },
    build: {
        manifest: true,
        rollupOptions: {
            input: {
                main: './main.js',
            }
        },
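        // Build output goes straight into the project docroot (htdocs) so the PHP site can serve the assets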
        outDir: '../htdocs/assets/dist'
    }
})

And the minimal parts of my package.json:

{
  "dependencies": {
  },
  "scripts": {
    "dev": "rm -rf dist/* && vite",
    "build": "vite build",
    "preview": "vite preview"
  },
  "devDependencies": {
    "vite": "^4.4.7"
  }
}

About this issue

  • State: closed
  • Created a year ago
  • Comments: 63 (47 by maintainers)

Most upvoted comments

I finally tested @stasadev’s work today using the latest nightly on ARM64 with OrbStack as the backend, and so far it seems to have fixed the problem. I’ll keep testing.

@pbowyer could you try the artifacts from https://github.com/ddev/ddev/pull/5533#issuecomment-1809202466?

I decided that making the project networks external was too intrusive and found a way to make them internal as before, but with the same duplicate check.

I hope this change will have the same effect with a simpler approach.

😁 @stasadev Thank you! 😁

I opened “proj3” in PhpStorm, a project which has always caused trouble. Your patch worked around the problem and ddev ran without error. Here’s the output:

peter@MBP2021 ~/p/n/proj3_web (develop)> docker ps -a
CONTAINER ID   IMAGE                                                    COMMAND                  CREATED         STATUS                    PORTS                                                                                                                                              NAMES
8674a91d7e39   ddev/ddev-webserver:v1.22.3-proj3-built                  "/pre-start.sh"          6 minutes ago   Created                   8025/tcp, 127.0.0.1:32873->80/tcp, 127.0.0.1:32872->443/tcp                                                                                        ddev-proj3-web
8db96985647b   ddev/ddev-webserver:v1.22.3-proj2-built                  "/pre-start.sh"          26 hours ago    Up 26 hours (unhealthy)   8025/tcp, 127.0.0.1:32871->80/tcp, 127.0.0.1:32870->443/tcp                                                                                        ddev-proj2-web
f3bb7f78119b   ddev/ddev-traefik-router:v1.22.4                         "/entrypoint.sh --co…"   32 hours ago    Up 32 hours (healthy)     127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:8025-8026->8025-8026/tcp, 127.0.0.1:9200-9201->9200-9201/tcp, 127.0.0.1:10999->10999/tcp   ddev-router
da7e66ff0873   ddev/ddev-webserver:20231030_apotek_xsl-proj1-built   "/pre-start.sh"          32 hours ago    Up 32 hours (healthy)     8025/tcp, 127.0.0.1:32866->80/tcp, 127.0.0.1:32865->443/tcp                                                                                        ddev-proj1-web
5a8880af8276   ddev/ddev-dbserver-mysql-5.7:v1.22.4-proj1-built      "/docker-entrypoint.…"   32 hours ago    Up 32 hours (healthy)     127.0.0.1:32867->3306/tcp                                                                                                                          ddev-proj1-db
c40fb6f697bb   elasticsearch:7.17.6                                     "/bin/tini -- /usr/l…"   32 hours ago    Up 32 hours (healthy)     9200/tcp, 9300/tcp                                                                                                                                 ddev-proj1-elasticsearch
522cbee3de8c   ddev/ddev-ssh-agent:v1.22.4-built                        "/entry.sh ssh-agent"    2 days ago      Up 2 days (healthy)                                                                                                                                                          ddev-ssh-agent
f3fde3ba6d5f   422dec05bb87                                             "/bin/sh"                3 months ago    Created                                                                                                                                                                      phpstorm_helpers_PS-231.9225.18

peter@MBP2021 ~/p/n/proj3_web (develop)> docker network ls
NETWORK ID     NAME                    DRIVER    SCOPE
c9892499e5ef   bridge                  bridge    local
fe39b2ee0a55   ddev-proj2_default      bridge    local
ffda021408e2   ddev-proj2_default      bridge    local
c80a7912b917   ddev-proj2_default      bridge    local
44f371fa1e96   ddev-proj1_default      bridge    local
7f2967d38e63   ddev-proj3_default      bridge    local
2fb6fb814c36   ddev-proj3_default      bridge    local
263458a8b5ab   ddev-proj3_default      bridge    local
25d9f1effd01   ddev-proj3_default      bridge    local
2659954b4d51   ddev-proj3_default      bridge    local
b69e5054a088   ddev-proj3_default      bridge    local
15540db6781a   ddev-proj3_default      bridge    local
b71d97dac547   ddev-proj3_default      bridge    local
e426cb88273e   ddev-proj3_default      bridge    local
ce4f168932e0   ddev-proj3_default      bridge    local
721ef5822b25   ddev-proj3_default      bridge    local
657963eb0e1f   ddev-proj3_default      bridge    local
cf621c9c476d   ddev-proj3_default      bridge    local
7431a6e5d81b   ddev-proj3_default      bridge    local
3f698f1f2e8e   ddev-proj3_default      bridge    local
12dc5898b6c0   ddev-proj3_default      bridge    local
4fd863eaf1d5   ddev_default            bridge    local
d97be534fb66   host                    host      local
b159031a67f9   none                    null      local

peter@MBP2021 ~/p/n/proj3_web (develop)> ddev -v
ddev version v1.22.5-alpha1-4-g8ccbe6ba1

peter@MBP2021 ~/p/n/proj3_web (develop)> ddev start

 TIP OF THE DAY
 `ddev snapshot -a` will back up all your project databases.

Starting proj3...
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Network ddev-proj3_default removed
Using custom MySQL configuration: [/Users/peter/projects/proj3/proj3_web/.ddev/mysql/my-proj3.cnf]
Using custom PHP configuration: [/Users/peter/projects/proj3/proj3_web/.ddev/php/proj3-php.ini]
Custom configuration is updated on restart.
If you don't see your custom configuration taking effect, run 'ddev restart'.
v1.22.4: Pulling from ddev/ddev-dbserver-mariadb-10.5
[snip]
Status: Downloaded newer image for ddev/ddev-dbserver-mariadb-10.5:v1.22.4
docker.io/ddev/ddev-dbserver-mariadb-10.5:v1.22.4
Building project images...
Project images built in 1s.
 Container ddev-proj3-web  Created
 Container ddev-proj3-elasticsearch  Created
 Container ddev-proj3-db  Created
 Container ddev-proj3-elasticsearch  Started
 Container ddev-proj3-db  Started
 Container ddev-proj3-web  Started
You have Mutagen enabled and your 'php' project type doesn't have `upload_dirs` set.
For faster startup and less disk usage, set upload_dirs to where your user-generated files are stored.
If this is intended you can disable this warning with `ddev config --disable-upload-dirs-warning`.
Starting Mutagen sync process...
.......................................Mutagen sync flush completed in 40s.
For details on sync status 'ddev mutagen st proj3 -l'
Waiting for web/db containers to become ready: [web db]
Starting ddev-router if necessary...
 Container ddev-router  Running
Waiting for additional project containers to become ready...
All project containers are now ready.
Successfully started proj3
Project can be reached at https://proj3.ddev.site https://127.0.0.1:32872
Instrumentation is opted in, but AmplitudeAPIKey is not available. This usually means you have a locally-built ddev binary or one from a PR build. It's not an error. Please report it if you're using an official release build.

I would dearly love Docker/OrbStack (which I use)/PhpStorm to stop creating unhealthy containers and duplicate networks, but thanks to you I don’t have to think about it each time I run ddev 😀

@pbowyer thanks for the feedback, I think I understand the problem better now.

I have a few ideas to fix this since I can now control the project network:

  1. When removing a network in ddev poweroff, look for the network ID, not the network name.
  2. There is a CheckDuplicate option that I can use when creating a network.

I will create a PR for this today or tomorrow.

Edit: I decided not to use CheckDuplicate as this option will be removed in Docker 25 anyway.
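
Roughly, idea 1 at the CLI level (the actual fix lives in DDEV’s Go code and goes through the Docker API; this is only an illustration of why removing by ID sidesteps the ambiguous-name problem, using this issue’s network name):

# Resolve every network that carries the project network name and remove each one by ID,
# so duplicate names can't make the removal ambiguous
docker network ls --filter "name=ddev-npg_default" --format "{{.ID}}" | xargs docker network rm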

I had this problem happen to me again today, using a nightly build. Command outputs slightly edited to hide the company I’m working for:

peter@MBP2021 ~/p/proj1 (master)> ddev start

 TIP OF THE DAY
 `ddev npm` is the right way to run npm commands in your web container.

Starting proj1...
Using custom apache configuration in /Users/peter/projects/proj1/.ddev/apache/apache-site.conf
Using custom MySQL configuration: [/Users/peter/projects/proj1/.ddev/mysql/sql_mode.cnf]
Custom configuration is updated on restart.
If you don't see your custom configuration taking effect, run 'ddev restart'.
Building project images...
.....................Project images built in 30s.
multiple networks with name "ddev-proj1_default" were found. Use network ID as `name` to avoid ambiguity
Failed to start proj1: composeCmd failed to run 'COMPOSE_PROJECT_NAME=ddev-proj1 docker-compose -f /Users/peter/projects/proj1/.ddev/.ddev-docker-compose-full.yaml up -d', action='[]', err='exit status 1', stdout='', stderr='multiple networks with name "ddev-proj1_default" were found. Use network ID as `name` to avoid ambiguity'

peter@MBP2021 ~/p/proj1 (master) [1]> ddev -v
ddev version v1.22.4-14-gd8ebc7d12

peter@MBP2021 ~/p/proj1 (master)> docker network ls
NETWORK ID     NAME                 DRIVER    SCOPE
c9892499e5ef   bridge               bridge    local
c5b138ab8ffd   ddev-proj2_default   bridge    local
83d2df694367   ddev-proj2_default   bridge    local
3c1d2c1ba6d8   ddev-proj2_default   bridge    local
33073972c2e8   ddev-proj1_default   bridge    local
7b06ed1c13bd   ddev-proj1_default   bridge    local
50ad5c8cf7fe   ddev-proj1_default   bridge    local
5a70857db3ae   ddev-proj1_default   bridge    local
52fefa2aacbb   ddev-proj1_default   bridge    local
478d09ae9cae   ddev-proj1_default   bridge    local
5c5364690a4f   ddev_default         bridge    local
d97be534fb66   host                 host      local
b159031a67f9   none                 null      local

peter@MBP2021 ~/p/proj1 (master)> docker ps -a
CONTAINER ID   IMAGE                                       COMMAND                  CREATED        STATUS                PORTS                                                                                                                                              NAMES
96c1137e8a97   ddev/ddev-webserver:v1.22.3-proj2-built   "/pre-start.sh"          4 days ago     Created                                                                                                                                                                  ddev-proj2-web
afbdaa838b9e   ddev/ddev-traefik-router:v1.22.4            "/entrypoint.sh --co…"   6 days ago     Up 6 days (healthy)   127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:8025-8026->8025-8026/tcp, 127.0.0.1:9200-9201->9200-9201/tcp, 127.0.0.1:10999->10999/tcp   ddev-router
f4a8531b711e   ddev/ddev-ssh-agent:v1.22.4-built           "/entry.sh ssh-agent"    6 days ago     Up 6 days (healthy)                                                                                                                                                      ddev-ssh-agent
f3fde3ba6d5f   422dec05bb87                                "/bin/sh"                3 months ago   Created                                                                                                                                                                  phpstorm_helpers_PS-231.9225.18

From this it looks to have also happened to another project, proj2, that I worked on last week but didn’t start the environment for.

Edit: this time ddev poweroff wasn’t enough to clear it. I had to resort to:

docker network ls | grep ddev-proj1_default | awk '{ print $1; }' | xargs docker network rm

The workaround has been merged into the master branch and will appear in v1.22.5.

Until then, anyone can use a nightly build.

🚀 Congrats.

Looking forward to seeing everybody’s results on this in the time before the next point release.

You’re awesome for tracking that so carefully!

Thanks for spotting that issue @stasadev - I wonder why this doesn’t happen more often. The project network could probably be created externally. It makes some things awkward, and using docker-compose externally (as with PhpStorm’s plugin for DDEV) becomes harder. An experimental PR would be welcome.
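
For anyone curious, "created externally" would roughly mean creating the project network once outside of docker-compose, e.g.:

docker network create ddev-npg_default

and having the generated .ddev-docker-compose-full.yaml declare it with external: true, so docker-compose down leaves it alone. As noted above, that approach was ultimately judged too intrusive.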

Great work studying it!

Even on a successful run the containers start and then there’s a 15s+ pause before the “Successfully started” message is printed. Is there any way I can see what’s going on in that time?

export DDEV_DEBUG=true

or if desperate and you really want to read lots of stuff

export DDEV_VERBOSE=true

and then you can see exactly what’s going on.
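
It should also work for a single run without exporting, e.g.:

DDEV_DEBUG=true ddev start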

I think your study here is going to get us somewhere… that it may have to do with PhpStorm. When PhpStorm uses the .ddev/.ddev-docker-compose-full.yaml, it’s doing it without any input from DDEV. DDEV thinks it’s already done its job (or perhaps a project hasn’t started yet), but PhpStorm is using raw docker-compose. So maybe that’s our path. Thank you for the great study and debugging, I think we’re going to get somewhere!
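
For context, "raw docker-compose" here means something along the lines of the command shown in the error output above, run directly against the generated file rather than through DDEV (PhpStorm’s exact invocation may differ):

COMPOSE_PROJECT_NAME=ddev-npg docker-compose -f /Users/peter/projects/npg/npg_web/.ddev/.ddev-docker-compose-full.yaml up -d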

With regard to your ddev start in a subfolder, I think that’s not the problem. I think you’ve accidentally run ddev config in a subfolder. ddev config gives a warning in that case, but perhaps you found a way to create a .ddev/config.yaml previously in a subdirectory:

rfay@rfay-tag1-m1:~/workspace/d10/junk$ ddev config --auto
It usually does not make sense to `ddev config` in a subdirectory of an existing project. Is it possible you wanted to `ddev config` in parent directory /Users/rfay/workspace/d10?

but `ddev start` definitely knows how to find the correct project and doesn’t get confused by this.

Hiya, I have continued to experiment and write up notes. I have uncovered two OrbStack issues, but I don’t yet have reproducers that would let me report them to OrbStack. See what you think, in case one of them involves ddev.

Issue 1: PhpStorm and OrbStack interplay

This only happens with particular PhpStorm projects, presumably ones that try to run something inside the Docker container automatically. It doesn’t involve the ddev plugin, as I’ve tried with it installed and removed.

What happens is that when my ddev project is not running and I open PhpStorm, it does the following:

> docker ps -a
CONTAINER ID   IMAGE                               COMMAND                 CREATED        STATUS                  PORTS     NAMES
dbcd138aabc7   ddev/ddev-ssh-agent:v1.22.0-built   "/entry.sh ssh-agent"   12 hours ago   Up 12 hours (healthy)             ddev-ssh-agent
f3fde3ba6d5f   422dec05bb87                        "/bin/sh"               11 days ago    Created                           phpstorm_helpers_PS-231.9225.18

> docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
ccb41cd50c43   bridge         bridge    local
012df10b2e38   ddev_default   bridge    local
d97be534fb66   host           host      local
b159031a67f9   none           null      local

### I open my PhpStorm project

> docker ps -a
CONTAINER ID   IMAGE                                   COMMAND                 CREATED         STATUS                  PORTS     NAMES
16df75968a0e   ddev/ddev-webserver:v1.22.0-npg-built   "/pre-start.sh"         4 seconds ago   Created                           ddev-npg-web
dbcd138aabc7   ddev/ddev-ssh-agent:v1.22.0-built       "/entry.sh ssh-agent"   12 hours ago    Up 12 hours (healthy)             ddev-ssh-agent
f3fde3ba6d5f   422dec05bb87                            "/bin/sh"               11 days ago     Created                           phpstorm_helpers_PS-231.9225.18

> docker network ls
NETWORK ID     NAME               DRIVER    SCOPE
ccb41cd50c43   bridge             bridge    local
5954c8413085   ddev-npg_default   bridge    local
32eeba2c017d   ddev-npg_default   bridge    local
5942e2cae323   ddev-npg_default   bridge    local
cadcd530431b   ddev-npg_default   bridge    local
cbca75a1f7ae   ddev-npg_default   bridge    local
5c507064878d   ddev-npg_default   bridge    local
62e74d9579c1   ddev-npg_default   bridge    local
93129e5da6a4   ddev-npg_default   bridge    local
66ecd98e027a   ddev-npg_default   bridge    local
cb46373f3e67   ddev-npg_default   bridge    local
2eba9c1129cb   ddev-npg_default   bridge    local
9d8c1f425c46   ddev-npg_default   bridge    local
63fad463bf4d   ddev-npg_default   bridge    local
c1eb0f28b8e9   ddev-npg_default   bridge    local
bc6e4aa78f88   ddev-npg_default   bridge    local
63ac3f654513   ddev-npg_default   bridge    local
012df10b2e38   ddev_default       bridge    local
d97be534fb66   host               host      local
b159031a67f9   none               null      local

To fix this problem I can run ddev stop, which cleans everything up perfectly:

> ddev stop
Project npg is already stopped.
 Container ddev-npg-web  Stopped
 Container ddev-npg-web  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
 Network ddev-npg_default  Removed
Project npg has been stopped.

> docker ps -a
CONTAINER ID   IMAGE                               COMMAND                 CREATED        STATUS                  PORTS     NAMES
dbcd138aabc7   ddev/ddev-ssh-agent:v1.22.0-built   "/entry.sh ssh-agent"   12 hours ago   Up 12 hours (healthy)             ddev-ssh-agent
f3fde3ba6d5f   422dec05bb87                        "/bin/sh"               11 days ago    Created                           phpstorm_helpers_PS-231.9225.18

> docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
ccb41cd50c43   bridge         bridge    local
012df10b2e38   ddev_default   bridge    local
d97be534fb66   host           host      local
b159031a67f9   none           null      local

This is a nuisance, but now that I know the fix I can work around it.

Issue 2: Subfolders, mutagen and project confusion

I only discovered this one late last night, so I haven’t yet had a chance to reproduce it with Docker Desktop to confirm it’s an OrbStack issue (I’ll edit this message when I do).

I had already run ddev start for the project. I cd’d into a subfolder, selected the wrong command from my history, and ran ddev start again.

This got me into a broken state, because I got the following (message captured from a reproducer this morning):

Starting npg...
Using custom mysql configuration: [/Users/peter/npg/.ddev/mysql/my-npg.cnf]
Custom configuration is updated on restart.
If you don't see your custom configuration taking effect, run 'ddev restart'.
 Container ddev-npg-elasticsearch  Running
 Container ddev-npg-web  Running
 Container ddev-npg-db  Recreate
 Container ddev-npg-db  Recreated
 Container ddev-npg-db  Started
Failed waiting for web/db containers to become ready: web container failed: log=[{2023-08-06 09:49:05.740120147 +0000 UTC 2023-08-06 09:49:05.817996948 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:06.826370476 +0000 UTC 2023-08-06 09:49:06.894583754 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:07.898742279 +0000 UTC 2023-08-06 09:49:07.931850604 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:08.935492139 +0000 UTC 2023-08-06 09:49:08.958825234 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:09.961078338 +0000 UTC 2023-08-06 09:49:09.991694085 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED }], err=container /ddev-npg-web-run-5aec3df7b5c9 unhealthy: [{2023-08-06 09:49:05.740120147 +0000 UTC 2023-08-06 09:49:05.817996948 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:06.826370476 +0000 UTC 2023-08-06 09:49:06.894583754 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:07.898742279 +0000 UTC 2023-08-06 09:49:07.931850604 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:08.935492139 +0000 UTC 2023-08-06 09:49:08.958825234 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:09.961078338 +0000 UTC 2023-08-06 09:49:09.991694085 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED }]
 Container ddev-router  Running
Failed to start npg: container(s) failed to become healthy before their configured timeout or in 120 seconds. This may be just a problem with the healthcheck and not a functional problem. (container /ddev-npg-web-run-5aec3df7b5c9 is unhealthy: [{2023-08-06 09:49:07.898742279 +0000 UTC 2023-08-06 09:49:07.931850604 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:08.935492139 +0000 UTC 2023-08-06 09:49:08.958825234 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:09.961078338 +0000 UTC 2023-08-06 09:49:09.991694085 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:10.996079982 +0000 UTC 2023-08-06 09:49:11.023543498 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED } {2023-08-06 09:49:12.028005809 +0000 UTC 2023-08-06 09:49:12.098777246 +0000 UTC 1 /var/www/html:OK mailhog:FAILED phpstatus:FAILED }])

I suspect ddev delete would’ve been the easiest way to fix this, but I needed to back up the database before running that… oops.

I tried a few things and in the end powered off ddev and then manually deleted the volume npg_project_mutagen. After that ddev start worked and I backed up the DB.

Then I ran ddev start again in a subfolder (what can I say, I was tired). Same thing happened.

This time I ran ddev config --performance-mode none. Now when I accidentally run ddev start in a subfolder it prints

Starting npg...
Using custom mysql configuration: [/Users/peter/npg/.ddev/mysql/my-npg.cnf]
Custom configuration is updated on restart.
If you don't see your custom configuration taking effect, run 'ddev restart'.
 Container ddev-npg-elasticsearch  Running
 Container ddev-npg-web  Running
 Container ddev-npg-db  Recreate
 Container ddev-npg-db  Recreated
 Container ddev-npg-db  Started
 Container ddev-router  Running
Successfully started npg
Project can be reached at https://npg.ddev.site https://127.0.0.1:32802

and more importantly, the web container still works afterwards.

Aside: startup time

Even on a successful run the containers start and then there’s a 15s+ pause before the “Successfully started” message is printed. Is there any way I can see what’s going on in that time?

There’s actually nothing wrong with running ddev start multiple times. I’m just trying to understand whether there might be some workflow that triggers this.

I sleep and power off my laptop all the time without even stopping projects, and I’ve never seen this on macOS or WSL2. Obviously it happens, though!

I just experimented with a few sites and saw only correct behavior with networks being deleted on stop.

Maybe it does have something to do with the Vite service, though that would seem strange.

Please do an occasional docker network ls to see if you can catch it in the act.
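
One low-effort way to do that, as a rough sketch (adjust the name filter to the project you’re watching):

# Print the matching project networks once a minute so duplicates show up quickly
while true; do
  date
  docker network ls --filter "name=ddev-npg_default"
  sleep 60
done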