tailscale: tailscale-operator not working on AKS

What is the issue?

I want to expose my AKS Kubernetes cluster via Tailscale, and tried to do that via the Tailscale proxy; however, that doesn’t seem to proxy the traffic to the destination IP. As suggested here, I tried out the Tailscale Operator, hoping that might work, but I’m running into the same issue there.

To troubleshoot the issue, I’ve spun up a secondary cluster using microk8s, and I’ve got the Tailscale Operator working there, as you can see in the logs of the proxy pods below:

Proxy pod on AKS

$ kubectl logs -n tailscale pod/ts-sample-workload-one-bjvhs-0
...
2023/05/10 08:27:55 Accept: TCP{100.115.48.77:64882 > 100.79.209.22:80} 52 tcp ok
2023/05/10 08:27:56 Accept: TCP{100.115.48.77:64882 > 100.79.209.22:80} 52 tcp ok

Proxy pod on microk8s

$ kubectl logs -n tailscale pod/ts-sample-workload-one-7nst7-0
...
2023/05/10 08:27:40 Accept: TCP{100.115.48.77:64880 > 100.67.186.56:80} 52 tcp ok
2023/05/10 08:27:40 Accept: TCP{100.67.186.56:80 > 100.115.48.77:64880} 52 ok out
2023/05/10 08:27:40 Accept: TCP{100.115.48.77:64880 > 100.67.186.56:80} 40 tcp non-syn

This is what I get when I curl the Tailscale IP of the sample workload (which is the Deployment and Service that I’ve used for that). When I hit the AKS cluster, nothing happens; when I hit the microk8s cluster, I get the expected output.
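The difference between the two logs can be spotted mechanically: the healthy microk8s proxy logs an `ok out` reply for every inbound `tcp ok` accept, while the AKS proxy never logs a reply. A rough sketch of that check (the `replies_seen` helper is mine, not part of tailscale):

```python
import re

def replies_seen(log_lines):
    """Count inbound TCP accepts ("tcp ok") and outbound replies ("ok out")."""
    inbound = sum(1 for line in log_lines if re.search(r"tcp ok$", line))
    outbound = sum(1 for line in log_lines if re.search(r"\bok out$", line))
    return inbound, outbound

# Sample lines taken verbatim from the proxy pod logs above.
aks = [
    "2023/05/10 08:27:55 Accept: TCP{100.115.48.77:64882 > 100.79.209.22:80} 52 tcp ok",
    "2023/05/10 08:27:56 Accept: TCP{100.115.48.77:64882 > 100.79.209.22:80} 52 tcp ok",
]
microk8s = [
    "2023/05/10 08:27:40 Accept: TCP{100.115.48.77:64880 > 100.67.186.56:80} 52 tcp ok",
    "2023/05/10 08:27:40 Accept: TCP{100.67.186.56:80 > 100.115.48.77:64880} 52 ok out",
    "2023/05/10 08:27:40 Accept: TCP{100.115.48.77:64880 > 100.67.186.56:80} 40 tcp non-syn",
]
print(replies_seen(aks))       # → (2, 0): accepts, but no replies
print(replies_seen(microk8s))  # → (1, 1): each accept gets a reply
```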

I’ve looked at the source of the Tailscale proxy image (which turns out to also be used under the hood by the Tailscale Operator), and I see that it basically only sets up some iptables rules, so the same rules should be visible in the proxy containers on both clusters. I don’t have any experience with iptables, and I don’t really know how to troubleshoot this, but I’ve found a couple of commands to inspect things. I can see that the AKS one has some rules installed, but it’s also missing quite a few: the ts-input and ts-forward chains exist, but nothing jumps to them (0 references), whereas on microk8s each is referenced once from INPUT and FORWARD.

Is this a bug in the Tailscale proxy image? Or should I report this to AKS?

iptables output on AKS

/ # iptables -nvL
Chain INPUT (policy ACCEPT 682 packets, 287K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 605 packets, 173K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain ts-forward (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain ts-input (0 references)
 pkts bytes target     prot opt in     out     source               destination
/ # iptables -vL
Chain INPUT (policy ACCEPT 682 packets, 287K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 605 packets, 173K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain ts-forward (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain ts-input (0 references)
 pkts bytes target     prot opt in     out     source               destination
/ # iptables -w -t nat -L "PREROUTING"
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       all  --  anywhere             100.79.209.22        to:10.0.45.41

iptables output on microk8s

/ # iptables -nvL
Chain INPUT (policy ACCEPT 100 packets, 25080 bytes)
 pkts bytes target     prot opt in     out     source               destination
  100 25080 ts-input   all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ts-forward  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 93 packets, 18922 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain ts-forward (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       all  --  tailscale0 *       0.0.0.0/0            0.0.0.0/0            MARK xset 0x40000/0xff0000
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x40000/0xff0000
    0     0 DROP       all  --  *      tailscale0  100.64.0.0/10        0.0.0.0/0
    0     0 ACCEPT     all  --  *      tailscale0  0.0.0.0/0            0.0.0.0/0

Chain ts-input (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  lo     *       100.67.186.56        0.0.0.0/0
    0     0 RETURN     all  --  !tailscale0 *       100.115.92.0/23      0.0.0.0/0
    0     0 DROP       all  --  !tailscale0 *       100.64.0.0/10        0.0.0.0/0
/ # iptables -w -t nat -L "PREROUTING"
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       all  --  anywhere             default-sample-workload-one-1.shark-egret.ts.net  to:10.152.183.137
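Comparing the filter tables above: on AKS the ts-input and ts-forward chains show `(0 references)`, meaning no jump rules from INPUT/FORWARD were installed, while on microk8s both show `(1 references)`. A small sketch (hypothetical helper, not part of any tool) that flags such orphaned chains in `iptables -nvL` output:

```python
import re

def unreferenced_chains(iptables_nvl_output):
    """Return names of user-defined chains that nothing jumps to."""
    return re.findall(r"Chain (\S+) \(0 references\)", iptables_nvl_output)

# Condensed from the AKS output above.
aks_filter_table = """\
Chain ts-forward (0 references)
Chain ts-input (0 references)
"""
print(unreferenced_chains(aks_filter_table))  # → ['ts-forward', 'ts-input']
```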

Steps to reproduce

  1. Deploy an AKS cluster
  2. Deploy the Tailscale Operator
  3. Deploy some sample workload that has the loadBalancerClass set to tailscale, like this one:
    • kubectl apply -f https://gist.githubusercontent.com/tiesmaster/d7b397f19015514451fd0cd58b37fb06/raw/ed83bca958ee43ab107f2468a849918d4d0da87f/sample-workload-one.yaml
  4. Hit the sample workload endpoint with curl
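The exact gist contents aren’t reproduced here, but a minimal Service with the tailscale loadBalancerClass looks roughly like this (illustrative sketch; the names are assumptions, not the gist’s exact values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-workload-one
spec:
  type: LoadBalancer
  # Hands the Service to the Tailscale Operator instead of the cloud LB.
  loadBalancerClass: tailscale
  selector:
    app: sample-workload-one
  ports:
    - port: 80
      targetPort: 80
```

The operator then creates the ts-sample-workload-one-* proxy pod in the tailscale namespace, which is where the logs quoted earlier come from.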

Are there any recent changes that introduced the issue?

No response

OS

Other

OS version

AKS with Kubernetes version 1.26.3

Tailscale version

No response

Other software

No response

Bug report

No response

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 15 (8 by maintainers)


Most upvoted comments

@tiesmaster @wadhah101 Major update posted by @KevinLiang10 here: https://github.com/tailscale/tailscale/issues/391#issuecomment-1642656929

It may or may not help with your issue, but it is worth testing.

@rodrigc Hi, I’m working on detecting whether nftables or iptables is available/used on the machine, and just using whichever it finds. The implementation isn’t targeted at solving k8s problems, though; it’s meant to relieve users from having to explicitly set an env var to use the new nftables feature.

spit out a log informing the user that they should set this particular variable to get things working

Thanks for this advice; we’ll discuss adding it to our runtime logging. It’s tricky because we’re detecting nftables/iptables support at runtime.

I’ll link these k8s issues when I put the PR up, and test whether the solution helps with this issue!
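One common heuristic for telling the two backends apart is the iptables version string: iptables-nft builds print `(nf_tables)` and legacy builds print `(legacy)`. A hypothetical sketch of that heuristic (not the actual detection tailscaled implements):

```python
def backend_from_version(version_output: str) -> str:
    """Guess the firewall backend from `iptables --version` output.

    iptables-nft builds report e.g. "iptables v1.8.7 (nf_tables)",
    legacy builds report e.g. "iptables v1.8.4 (legacy)".
    """
    if "nf_tables" in version_output:
        return "nftables"
    if "legacy" in version_output:
        return "iptables-legacy"
    return "unknown"

print(backend_from_version("iptables v1.8.7 (nf_tables)"))  # → nftables
print(backend_from_version("iptables v1.8.4 (legacy)"))     # → iptables-legacy
```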

@tiesmaster Thanks for your response, I lack the skills to debug this unfortunately. Thanks for keeping everyone posted 🙏

@rodrigc Awesome! Thanks for the links. I’m gonna do some reading up on those, and report back here with my findings