nuxt: [NITRO] WARN [worker] listen EADDRINUSE: address already in use

Environment

------------------------------
- Operating System: `Linux`
- Node Version:     `v17.8.0`
- Nuxt Version:     `3.0.0-27477996.646c2f6`
- Package Manager:  `yarn@3.2.0`
- Builder:          `vite`
- User Config:      `modules`
- Runtime Modules:  `normalizedModule()`, `normalizedModule()`
- Build Modules:    `-`
------------------------------

Reproduction

Happens randomly.

Describe the bug

Sometimes I get this error in the console during development. It seems completely random, but happens often.

Additional context

Nuxt runs in my clean Docker container:

FROM node:17.8.0-alpine3.15 as nuxtBuild

WORKDIR /app

COPY ./client ./

RUN yarn config set npmRegistryServer https://registry.npmmirror.com/ \
    && yarn install \
    && yarn run build

FROM node:17.8.0-alpine3.15

WORKDIR /app

COPY ./client/package.json /app/package.json
COPY ./docker-entrypoint.sh /usr/local/bin/oxy-entrypoint
COPY --from=nuxtBuild /app/.output /app/.output

RUN chmod +x /usr/local/bin/oxy-entrypoint

ENV NUXT_HOST=0.0.0.0

ENTRYPOINT ["oxy-entrypoint"]

Entrypoint:

#!/bin/sh
set -e

if [ "$NODE_ENV" = "production" ]; then
    yarn run start
else
    yarn install
    yarn run dev
fi
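A hedged sketch of the cleanup several commenters converged on: before starting the dev server, clear nitro's socket directory (assuming the default /tmp/nitro location) so a stale worker-*.sock left by a previous run cannot trigger EADDRINUSE:

```shell
#!/bin/sh
# Sketch only: clear stale nitro worker sockets before `nuxt dev`.
# Assumes nitro's default socket directory, /tmp/nitro.
set -e

rm -rf /tmp/nitro      # drop any worker-*.sock left by a previous run
mkdir -p /tmp/nitro    # recreate the directory for the new workers
```

This could be run at the top of the entrypoint's development branch, just before `yarn run dev`.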

Logs

WARN  [worker] listen EADDRINUSE: address already in use /tmp/nitro/worker-74-2.sock
 
   at Server.setupListenHandle [as _listen2] (node:net:1355:21)
   at listenInCluster (node:net:1420:12)
   at Server.listen (node:net:1519:5)
   at .nuxt/nitro/index.mjs:140:8
   at ModuleJob.run (node:internal/modules/esm/module_job:198:25)
   at async Promise.all (index 0)
   at async ESMLoader.import (node:internal/modules/esm/loader:385:24)
   at async loadESM (node:internal/process/esm_loader:88:5)
   at async handleMainPromise (node:internal/modules/run_main:61:12)

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 17
  • Comments: 41 (22 by maintainers)

Most upvoted comments

I have the same issue. Running Docker Compose with the “force recreate” option solved it for me, but that’s just a workaround.

docker-compose up --force-recreate

I changed one line in package.json to solve it, at least for now:

"dev": "nuxt dev" → "dev": "rm -rf /tmp/nitro && nuxt dev"
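For context, that change corresponds to a scripts entry like the following (the /tmp/nitro path assumes nitro's default socket directory):

```json
{
  "scripts": {
    "dev": "rm -rf /tmp/nitro && nuxt dev"
  }
}
```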

Didn’t see anyone mention it, but another workaround is to mark the container’s /tmp folder as tmpfs storage so the socket files won’t persist across runs. With docker-compose, add tmpfs: /tmp to your service definition; when running the container directly, add --tmpfs /tmp to the run command.
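In compose terms, the tmpfs workaround described above might look like this (the service name is hypothetical):

```yaml
services:
  nuxt:                # hypothetical service name
    tmpfs:
      - /tmp           # socket files vanish when the container stops
```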

I have this issue with Docker and it’s getting tiresome. Every file change triggers it, so I can’t use hot reload. Is there any progress?

Note: this should be less likely to happen after the nitropack migration, since we now close the old worker before starting a new one (instead of replacing it in parallel), but the root cause is not addressed. I think we can close this issue if it isn’t happening frequently, and create a reproduction for nitro for future DX improvement.

I get this bug very often in Docker container. 😢

I still get the error in Docker:

Nuxt CLI v3.0.0-rc.1-27510703.46ecbc5
[nitro] [dev] [uncaughtException] Error: listen EADDRINUSE: address already in use /tmp/nitro/worker-29-2.sock

 ERROR  [worker reload] [worker] exited

  at Worker.<anonymous> (node_modules/nitropack/dist/chunks/prerender.mjs:2039:14)
  at Object.onceWrapper (node:events:510:26)
  at Worker.emit (node:events:390:28)
  at Worker.emit (node:domain:475:12)
  at Worker.[kOnExit] (node:internal/worker:278:10)
  at Worker.<computed>.onexit (node:internal/worker:198:20)

I must delete /tmp/nitro/worker-1336-2.sock locally and delete the running Docker container every time a file changes.

Is there any visibility on this issue?

Got the same issue using Docker with the following Dockerfile:

FROM node:16-alpine

RUN apk add --no-cache python3 py3-pip make g++

USER root
RUN npm install -g @vue/cli && \
    npm install -g @vue/cli-init && \
    npm install -g nuxi

RUN mkdir /home/node/app

WORKDIR /home/node/app
ADD ./src/package*.json /home/node/app/

CMD /bin/sh

docker-compose:

app:
    container_name: app
    build:
      context: .
      dockerfile: ./development/Dockerfile
    ports:
      - "8024:3000"
      - "24678:24678"
    volumes:
      - ./src:/home/node/app:rw
      - ./src/node_modules:/home/node/app/node_modules
    command: /bin/sh -c "npm install && npm run dev -- -o"
    environment:
      - CHOKIDAR_USEPOLLING=true

Same here, so I just delete the sockets when it happens. Not nice, but at least it’s an easy workaround.

docker-compose exec nuxt /bin/sh -c 'rm -rf /tmp/nitro/worker-*'

Let’s track in https://github.com/unjs/nitro/issues/885. Any additional information or minimal nitro reproduction is more than welcome.

Hi @misaon, are you still experiencing this issue with the latest edge version and nitropack? (If yes, please ping to reopen as a new issue in the nitro repo!)

The tmpfs /tmp workaround quoted above worked for me.

@vwasteels did you try docker-compose up --force-recreate? When I had the error, I ran this command once and haven’t had the issue since.

This seems to do the trick 😃 Thanks!

Under the hood, recreating the container just deletes the /tmp/nitro folder inside it. This workaround can also be done manually, without recreating: $ rm -rf /tmp/nitro/worker-*

@amirmms Thanks for sharing