tailscale: Code path in `tailscale cert` does not respect `--socket`?
What is the issue?
Running `tailscale cert` against the Tailscale Docker container fails with a “failed to connect” error.
$ docker exec tailscaled-${SERVICE_NAME} tailscale cert ${MY_HOSTNAME}.${MY_TS_TLS_DOMAIN}
Failed to connect to local Tailscale daemon for /localapi/v0/cert/<REDACTED>; not running? Error: dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory
The same error occurs when running the command from a Bourne shell inside the Docker container (docker exec -ti tailscaled-${SERVICE_NAME} /bin/sh).
Passing the `--socket /tmp/tailscaled.sock` flag to tailscale did not fix the issue. (/tmp/tailscaled.sock is where the Docker image keeps the socket, which I discovered by looking at ps aux inside the container.)
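For reference, the invocation was roughly of this shape (the global --socket flag goes before the subcommand):
$ docker exec tailscaled-${SERVICE_NAME} tailscale --socket /tmp/tailscaled.sock cert ${MY_HOSTNAME}.${MY_TS_TLS_DOMAIN}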
What did fix the issue was the following:
docker exec tailscaled-${SERVICE_NAME} mkdir -p /var/run/tailscale
docker exec tailscaled-${SERVICE_NAME} ln -s /tmp/tailscaled.sock /var/run/tailscale/tailscaled.sock
and then proceeding as above.
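For convenience, the same workaround can also be applied in a single exec; a sketch (the -f flag simply lets the symlink be recreated if it already exists):
$ docker exec tailscaled-${SERVICE_NAME} /bin/sh -c 'mkdir -p /var/run/tailscale && ln -sf /tmp/tailscaled.sock /var/run/tailscale/tailscaled.sock'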
I’m no Gopher, so I can’t provide a PR to your code, but I suspect that there’s a code path specific to the `tailscale cert` operation that doesn’t respect the `--socket` flag.
Steps to reproduce
$ docker exec tailscaled-${SERVICE_NAME} tailscale cert ${MY_HOSTNAME}.${MY_TS_TLS_DOMAIN}
Failed to connect to local Tailscale daemon for /localapi/v0/cert/<REDACTED>; not running? Error: dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory
The same error occurs when running the command from a Bourne shell inside the Docker container (docker exec -ti tailscaled-${SERVICE_NAME} /bin/sh).
Are there any recent changes that introduced the issue?
Not aware of any.
OS
Linux, Other
OS version
Debian 12
Tailscale version
1.50.1
Other software
No response
Bug report
BUG-0143464cecfc011a2bbdd6d98048e94e3da96c1d38d8365f7a3ba58aa918ee6c-20231031120959Z-e28357028ea48321
About this issue
- Original URL
- State: closed
- Created 8 months ago
- Comments: 39 (19 by maintainers)
Commits related to this issue
- cmd/containerboot: symlink TS_SOCKET to socket expected by CLI. `tailscaled` and `tailscale` expect the socket to be at `/var/run/tailscale/tailscaled.sock`, however containerboot would set up the soc... — committed to tailscale/tailscale by maisem 7 months ago
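On images that include the containerboot fix above, the CLI's default socket path should already exist inside the container; a quick way to check (assuming the same container name as in the report):
$ docker exec tailscaled-${SERVICE_NAME} ls -l /var/run/tailscale/tailscaled.sock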
@flimamacedo I’ve created a separate issue for resolving Tailscale FQDNs in-cluster: https://github.com/tailscale/tailscale/issues/10499. I’ve also added a possible workaround to that issue.
OK, do let us know if it does not work.
We are planning to make Tailscale FQDNs resolvable from within the cluster too (cc @knyar), but are currently looking for more use cases for this functionality.
(It might be that you are hitting https://github.com/tailscale/tailscale/issues/10027#issuecomment-1827763853; however, I saw that your setup was working previously (?), so I assumed that you were somehow able to expose Dex with a kube DNS name.)
The issue was fixed after adding the missing metadata value (namespace); in my case, that field was missing from my Ingress definition.
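For illustration only (all names are placeholders, not the original manifest), a minimal operator-managed Ingress with an explicit metadata.namespace might look like this sketch:
$ kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
spec:
  ingressClassName: tailscale
  defaultBackend:
    service:
      name: my-app
      port:
        number: 80
  tls:
    - hosts:
        - my-app
EOF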
Thanks @irbekrm !
Yes.
Hi @irbekrm,
The app was deployed using this repository/instructions. The same repo also contains instructions for connecting to the web app using port-forward.
Tailscale Funnel works by examining the SNI header of an incoming TLS connection to determine which tailnet it is destined for, and it forwards the encrypted TLS bits to the node on that tailnet. Funnel does not possess the HTTPS certificate for your site; the node on your tailnet handles decryption and responds to any requests.
All of which is to say: Funnel cannot make changes to the request, such as rewriting the URL path, because it does not have the capability to decrypt the stream.
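One rough way to observe this from outside (a sketch, assuming openssl is available and <node>.<tailnet>.ts.net is a placeholder for a Funnel-enabled hostname): the certificate returned is the one held by the node itself, since Funnel only relays the encrypted bytes.
$ openssl s_client -connect <node>.<tailnet>.ts.net:443 -servername <node>.<tailnet>.ts.net </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer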
If you follow these steps and still don’t see an FQDN in the Ingress status, the next thing to do would be to look at the operator logs.
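For example, something along these lines (the deployment and namespace names depend on how the operator was installed; operator in the tailscale namespace is a common default):
$ kubectl logs deploy/operator -n tailscale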
It does not look like the operator picked it up. Did you actually deploy the Tailscale Kubernetes operator? Also, you don’t need to create a tailscale sidecar proxy as well.
The steps that you should follow are:
The Tailscale Kubernetes operator will then create a proxy for you and issue certs for that proxy’s tailnet FQDN; you should then be able to access your app over HTTPS via the Ingress hostname that you can see in its status.
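As a sketch with placeholder names, the hostname can be read from the Ingress status and then tested over HTTPS:
$ kubectl get ingress my-app -n my-namespace -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
$ curl -sSv https://<hostname-from-status>/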
Let me check it. Thanks!