tsx: Inside a Docker container, tsx stops working after the container restarts

Acknowledgements

  • I searched existing issues before opening this one to avoid duplicates
  • I understand this is not a place to seek help, but to report a bug
  • I understand that the bug must be proven first with a minimal reproduction
  • I will be polite, respectful, and considerate of people’s time and effort

Minimal reproduction URL

https://github.com/louislam/tsx-issue-reproduce

Version

Tested 4.7.0 and 4.6.2

Node.js version

Tested 20.11.0 and 18.19.0

Package manager

npm

Operating system

Linux

Problem & Expected behavior

Original issue from my project: https://github.com/louislam/dockge/issues/353. For some users, my application Dockge is unable to start after their server is restarted.

Reproduce steps

  1. The issue can only be reproduced inside a Docker container; on a bare-metal Ubuntu machine it works fine. So you need Docker and the docker-compose plugin to test it.
  2. Checkout: https://github.com/louislam/tsx-issue-reproduce
  3. Run docker compose build
  4. Run docker compose up
  5. You should see it keeps printing Forever Loop
  6. Ctrl + C to stop the stack
  7. Run docker compose up again
  8. An error like Error: listen EADDRINUSE: address already in use /tmp/tsx-0/26.pipe will be shown, and the process exits with code 1
  9. If you don’t see it, run docker compose up a few more times; there is a very high chance of hitting the error.

My observation

  • /tmp/ is persistent inside a Docker container: all files there remain after the container restarts
  • The PID is usually the same after a restart (in my case, it is 26)
  • /tmp/tsx-0/26.pipe is not removed after the container is stopped

Caused by

https://github.com/privatenumber/tsx/blob/985bbb8cff1f750ad02e299874e542b6f63495ef/src/utils/ipc/server.ts#L40

I believe this was introduced by https://github.com/privatenumber/tsx/pull/413

Error Log

test-tsx-test-tsx-1  | > ls -l /tmp/tsx-0 ; tsx index.ts
test-tsx-test-tsx-1  |
test-tsx-test-tsx-1  | total 4
test-tsx-test-tsx-1  | -rw-r--r-- 1 root root 1172 Jan 17 21:07 17055-91de2c17650b57ed05bae0f65c605f825076ee72
test-tsx-test-tsx-1  | srwxr-xr-x 1 root root    0 Jan 17 21:07 26.pipe
test-tsx-test-tsx-1  | node:internal/errors:563
test-tsx-test-tsx-1  |     ErrorCaptureStackTrace(err);
test-tsx-test-tsx-1  |     ^
test-tsx-test-tsx-1  |
test-tsx-test-tsx-1  | Error: listen EADDRINUSE: address already in use /tmp/tsx-0/26.pipe
test-tsx-test-tsx-1  |     at Server.setupListenHandle [as _listen2] (node:net:1855:21)
test-tsx-test-tsx-1  |     at listenInCluster (node:net:1920:12)
test-tsx-test-tsx-1  |     at Server.listen (node:net:2025:5)
test-tsx-test-tsx-1  |     at file:///test-tsx/node_modules/tsx/dist/cli.mjs:53:31317
test-tsx-test-tsx-1  |     at new Promise (<anonymous>)
test-tsx-test-tsx-1  |     at yn (file:///test-tsx/node_modules/tsx/dist/cli.mjs:53:31295)
test-tsx-test-tsx-1  |     at async file:///test-tsx/node_modules/tsx/dist/cli.mjs:55:459 {
test-tsx-test-tsx-1  |   code: 'EADDRINUSE',
test-tsx-test-tsx-1  |   errno: -98,
test-tsx-test-tsx-1  |   syscall: 'listen',
test-tsx-test-tsx-1  |   address: '/tmp/tsx-0/26.pipe',
test-tsx-test-tsx-1  |   port: -1
test-tsx-test-tsx-1  | }
test-tsx-test-tsx-1  |
test-tsx-test-tsx-1  | Node.js v20.11.0

Contributions

  • I plan to open a pull request for this issue
  • I plan to make a financial contribution to this project

About this issue

  • Original URL
  • State: closed
  • Created 5 months ago
  • Reactions: 2
  • Comments: 22 (11 by maintainers)

Most upvoted comments

We are also experiencing this issue of the pipe file not being cleaned up by tsx when the container shuts down. As a workaround, one can use the node --import tsx index.ts command and use tsx only as a loader. This works, but you lose the very useful watch part of tsx, so a proper solution would be great 😃

🎉 This issue has been resolved in v4.7.1

If you appreciate this project, please consider supporting this project by sponsoring ❤️ 🙏

Gotcha. I appreciate all the context and use-cases where the socket path could collide.

We could randomize the socket path but the tricky part is communicating what the random pipe path is from the parent to the child.

Will think about this some more, but I welcome any ideas or PRs!

Is the process exiting abruptly?

According to my users, they just reboot or shut down their machine normally; I assume Docker shuts the container down gracefully.

According to my test, docker compose stop or Ctrl + C also doesn’t trigger the cleanup event.

Based on the current info, maybe the real issue is that if the process is killed by a signal, the process exit event is not triggered, so it never runs the cleanup code.

I just checked on my bare machine without Docker; those files were actually not cleaned up there either.

On a bare machine, the PID is fairly random each time, so it has a lower chance of colliding.

Inside a container, the PID is almost always the same (I would say 90% of the time).

Docker environments have the “HOSTNAME” env which is unique per container. I see two ways of resolving this:

  • Making the pipePath a concatenation of hostname and pid (hostname may not exist in “non-Docker scenarios”, but that’s not a problem, since pid alone is OK in that case).
  • Accepting a “pipe filename” input parameter, so that advanced scenarios like this one can be handled more easily (e.g. --pipe-filename=$HOSTNAME-$PID.pipe or --pipe-filename=$UUID.pipe).

Oh, I missed the point where multiple containers could use the same volume.

In my opinion, other applications do not handle it either; I have seen a lot of applications put their pipe files in /var/run.

If you mount the same /var/run into multiple containers, I believe it will not work either.

Also for reference: pm2 is a global npm package used to manage Node.js processes. It puts its pipe files in /home/<username>/.pm2/pids. Maybe this is a better location than /tmp?

I think it should be fine for the Node process to re-use the existing pipe file though because there can only be one process using the same ID at a time.

@privatenumber This is not always true. In our case we have multiple projects running with tsx inside Docker containers, and we have to mount the host’s /tmp folder as a volume at the container’s /tmp for other reasons. The PID of the tsx process is the same in both Docker containers, which means only one of the containers can start, because the other one will already find a pipe file using the same PID. That file belongs to the other tsx process, though, so it is not fine to reuse it.

Maybe a random name (a UUID?) could be used as the pipe name, and in the unlikely scenario of it already existing, a new one could be generated?

The IPC server is how the tsx parent (e.g. the watcher) and the child (the actual Node.js process) communicate things like exit signals or watch dependencies.

Is the process exiting abruptly? Sounds like it’s not getting cleaned up: e.g. https://github.com/privatenumber/tsx/blob/985bbb8cff1f750ad02e299874e542b6f63495ef/src/utils/ipc/server.ts#L64

I think it should be fine for the Node process to re-use the existing pipe file though because there can only be one process using the same ID at a time.

Do you mind adding Docker via GitHub Actions to your repo: https://github.com/louislam/tsx-issue-reproduce ? I don’t have a Docker env to test it in.