k3s: k3s kubectl logs pod failed behind proxy

Environmental Info: K3s Version:

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1+k3s1", GitCommit:"0581808f5c160b0c0cafec5b8f20430835f34f44", GitTreeState:"clean", BuildDate:"2022-06-11T17:26:28Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1+k3s1", GitCommit:"0581808f5c160b0c0cafec5b8f20430835f34f44", GitTreeState:"clean", BuildDate:"2022-06-11T17:26:28Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

Node(s) CPU architecture, OS, and Version:

Cluster Configuration:

Describe the bug:

When running k3s behind a proxy, kubectl fails to get pod logs.

cat /root/.bash_profile

#esnet proxy
PROXY_URL="http://10.3.254.254:3128/"
export http_proxy="$PROXY_URL"
export https_proxy="$PROXY_URL"
export no_proxy="127.0.0.1,10.0.0.0/8,10.3.0.0/16,10.169.72.0/24,localhost"

install k3s

root@cilium-demo-1:/home#  curl -sfL https://get.k3s.io | INSTALL_K3S_SYMLINK=force INSTALL_K3S_VERSION='v1.24.1+k3s1' INSTALL_K3S_EXEC='--disable=traefik --disable-network-policy' sh -

[INFO]  Using v1.24.1+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.1+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.1+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

pods running

root@cilium-demo-1:/home# kubectl get po -o wide -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE   IP          NODE            NOMINATED NODE   READINESS GATES
kube-system   local-path-provisioner-7b7dc8d6f5-4g26s   1/1     Running   0          88s   10.42.0.3   cilium-demo-1   <none>           <none>
kube-system   coredns-b96499967-m9qsk                   1/1     Running   0          88s   10.42.0.2   cilium-demo-1   <none>           <none>
kube-system   metrics-server-668d979685-sqfrz           1/1     Running   0          88s   10.42.0.4   cilium-demo-1   <none>           <none>

get pod logs

root@cilium-demo-1:/home# kubectl logs coredns-b96499967-m9qsk  -n kube-system

Error from server: Get "https://cilium-demo-1:10250/containerLogs/kube-system/coredns-b96499967-m9qsk/coredns": Service Unavailable
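The failure is consistent with how proxy environment variables are matched: no_proxy entries are compared against the literal request host, and CIDR entries such as 10.0.0.0/8 only ever match IP-literal hosts. The apiserver here dials the node by hostname (cilium-demo-1:10250), which none of the CIDR entries can match, so the request goes to the proxy. A minimal shell sketch of that literal matching (illustrative only — this is not the actual matcher k3s uses):

```shell
# Illustrative only: a bare hostname is compared as a string against the
# no_proxy list, so CIDR entries never match it and the request falls
# through to the proxy.
no_proxy="127.0.0.1,10.0.0.0/8,10.3.0.0/16,10.169.72.0/24,localhost"
host="cilium-demo-1"
case ",$no_proxy," in
  *",$host,"*) decision="bypass proxy" ;;
  *)           decision="goes through proxy" ;;
esac
echo "$decision"   # -> goes through proxy
```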

Steps To Reproduce:

Expected behavior: kubectl logs works for pods when k3s runs behind a proxy

Actual behavior: kubectl logs fails with "Service Unavailable" when k3s runs behind a proxy

Additional context / logs:

Backporting

  • Needs backporting to older releases

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 2
  • Comments: 18 (7 by maintainers)

Most upvoted comments

Yeah, that’s fair. Previous releases of K3s carried a patch to core Kubernetes that replaced the dialer the apiserver used to access the kubelet with one that had proxy support disabled and used remotedialer.

Now that we’ve dropped that patch (and we’re always trying to drop patches), you’re seeing the same behavior as upstream vanilla Kubernetes - which is that the apiserver will try to use the proxy to access the kubelet, if one is configured. I’ll have to see if there’s a better way to handle this without bringing back some or all of that patch.

This behavior is a little unexpected; I’ll have to do some checking to see if there’s any way to improve the default.

@brandond thanks for making k3s a great project. I use k3s daily as a test lab for my OSS cilium contributions, and the company I work for also uses k3s 😃

@brandond Thanks again for the hints - yup, adding the internal CIDRs to no_proxy did solve the issue. I still don’t understand why that is needed now when it wasn’t needed in the past; I share the same concerns as @max-wittig about a breaking change in a minor release.
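For reference, a sketch of proxy settings in /etc/systemd/system/k3s.service.env that would cover the cluster-internal traffic; 10.42.0.0/16 and 10.43.0.0/16 are the k3s default cluster and service CIDRs, and the hostname entry is specific to this node - adjust all values to your environment:

```shell
# Sketch of /etc/systemd/system/k3s.service.env -- assumed values based on
# this report. 10.42.0.0/16 and 10.43.0.0/16 are the k3s default cluster
# and service CIDRs. Restart k3s after editing: systemctl restart k3s
HTTP_PROXY="http://10.3.254.254:3128/"
HTTPS_PROXY="http://10.3.254.254:3128/"
NO_PROXY="127.0.0.1,localhost,10.0.0.0/8,10.42.0.0/16,10.43.0.0/16,cilium-demo-1"
```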

Also a couple of extra points:

Hmm, so it’s trying to connect to the kubelet via the configured HTTP proxy, by way of the apiserver egress. All the egress sees is the request to connect to the proxy. That is interesting.

Can you try with --kube-apiserver-arg=kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname ?
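For anyone wanting to try that, the flag can be passed through INSTALL_K3S_EXEC at install time (sketch - the flag is taken verbatim from the comment above, and with InternalIP preferred the apiserver dials the node by IP, which the CIDR entries in no_proxy can match):

```shell
# Sketch: the suggested apiserver flag appended to the original install
# options from this report.
INSTALL_K3S_EXEC='--disable=traefik --disable-network-policy --kube-apiserver-arg=kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname'
echo "$INSTALL_K3S_EXEC"
# then run the installer as before:
#   curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION='v1.24.1+k3s1' INSTALL_K3S_EXEC="$INSTALL_K3S_EXEC" sh -
```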