linkerd2: Automatic proxy injection is not working

Automatic proxy injection is not working at all: the proxy sidecar is not injected when a deployment is created in a namespace carrying the proper “linkerd.io/inject: enabled” annotation.

How can it be reproduced?

Configuration was done as described here -> https://linkerd.io/2/tasks/automating-injection. Admission registration is enabled on the cluster, and Linkerd was installed with the --proxy-auto-inject flag. The annotation “linkerd.io/inject: enabled” was set on both the namespace and the deployment (also tested multiple times with different configurations: namespace only, deployment only, etc.), applied roughly as sketched below. I have tested our own apps and also tried the helloworld example from the Linkerd tutorial. No luck in any case.
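For reference, a minimal sketch of applying the annotation (crow and helloworld are the namespace and deployment from the details below; the patch form is illustrative and puts the annotation on the pod template, which is where the injector looks for workload-level annotations):

# namespace-wide injection
kubectl annotate namespace crow linkerd.io/inject=enabled
# per-workload injection; illustrative, equivalent to editing the pod template annotations
kubectl -n crow patch deploy/helloworld -p '{"spec":{"template":{"metadata":{"annotations":{"linkerd.io/inject":"enabled"}}}}}'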

Logs, error output, etc

Output of: kubectl -n linkerd get deploy/linkerd-proxy-injector svc/linkerd-proxy-injector

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/linkerd-proxy-injector   1/1     1            1           20m

NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/linkerd-proxy-injector   ClusterIP   10.101.230.48   <none>        443/TCP   20m
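One quick sanity check at this point (not shown in the original report) is to confirm the webhook service actually has a ready endpoint behind it:

kubectl -n linkerd get endpoints linkerd-proxy-injector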

Proxy injector pod logs:

time="2019-05-24T08:37:12Z" level=info msg="running version stable-2.3.0"
time="2019-05-24T08:37:12Z" level=info msg="deleting existing webhook configuration"
time="2019-05-24T08:37:12Z" level=info msg="created webhook configuration: /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations/linkerd-proxy-injector-webhook-config"
time="2019-05-24T08:37:12Z" level=info msg="waiting for caches to sync"
time="2019-05-24T08:37:12Z" level=info msg="caches synced"
time="2019-05-24T08:37:12Z" level=info msg="starting admin server on :9995"
time="2019-05-24T08:37:12Z" level=info msg="listening at :8443"
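The log lines above show the injector re-registering its webhook, so the registration itself can be inspected directly, for example:

kubectl get mutatingwebhookconfiguration linkerd-proxy-injector-webhook-config -o yaml

The failurePolicy and clientConfig.caBundle fields in that output turn out to be relevant here, given the certificate error found later in the comments.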

Helloworld example pod (notice only one container):

NAME                         READY   STATUS    RESTARTS   AGE
helloworld-fdb7dc65f-7k2wl   1/1     Running   0          9s

Deployment details:

Name:                   helloworld
Namespace:              crow
CreationTimestamp:      Fri, 24 May 2019 09:06:21 +0000
Labels:                 run=helloworld
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               run=helloworld
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=helloworld
  Containers:
   helloworld:
    Image:        buoyantio/helloworld
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   helloworld-fdb7dc65f (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  13m   deployment-controller  Scaled up replica set helloworld-fdb7dc65f to 1

Namespace configuration:

Name:         crow
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"linkerd.io/inject":"enabled"},"name":"crow"}}
              linkerd.io/inject: enabled
Status:       Active

No resource quota.

No resource limits.

Output of: kubectl -n crow get po -l run=helloworld -o jsonpath='{.items[0].spec.containers[*].name}'

helloworld
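For comparison, with injection working the same query would be expected to list the sidecar container as well, something like:

helloworld linkerd-proxy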

linkerd check output

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ control plane namespace exists
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles

linkerd-version
---------------
× can determine the latest version
    Get https://versioncheck.linkerd.io/version.json?version=stable-2.3.0&uuid=unknown&source=cli: dial tcp: lookup versioncheck.linkerd.io on <IP>:53: no such host
    see https://linkerd.io/checks/#l5d-version-latest for hints
‼ cli is up-to-date
    unsupported version channel: stable-2.3.0
    see https://linkerd.io/checks/#l5d-version-cli for hints

control-plane-version
---------------------
‼ control plane is up-to-date
    unsupported version channel: stable-2.3.0
    see https://linkerd.io/checks/#l5d-version-control for hints
√ control plane and cli versions match

Status check results are ×

Environment

  • Kubernetes Version: 1.14.0
  • Cluster Environment: kubeadm
  • Host OS: CentOS 7.6.1810
  • Linkerd version: 2.3.0

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 3
  • Comments: 15 (6 by maintainers)

Most upvoted comments

Got it, certificate issue. The webhook call is failing TLS verification, and because the webhook fails open, pods are admitted without the sidecar and with no visible error:

W0525 12:38:22.461740       1 dispatcher.go:70] Failed calling webhook, failing open linkerd-proxy-injector.linkerd.io: failed calling webhook "linkerd-proxy-injector.linkerd.io": Post https://linkerd-proxy-injector.linkerd.svc:443/?timeout=30s: x509: certificate signed by unknown authority
E0525 12:38:22.461775       1 dispatcher.go:71] failed calling webhook "linkerd-proxy-injector.linkerd.io": Post https://linkerd-proxy-injector.linkerd.svc:443/?timeout=30s: x509: certificate signed by unknown authority
I0525 12:38:22.467141       1 trace.go:81] Trace[759612571]: "Create /api/v1/namespaces/crow/pods" (started: 2019-05-25 12:38:21.867963774 +0000 UTC m=+4299174.359556153) (total time: 599.159502ms):
Trace[759612571]: [593.847043ms] [593.780607ms] About to store object in database
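These are kube-apiserver logs. On a kubeadm cluster the API server runs as a static pod, so they can be fetched with something like:

# <node-name> is a placeholder for the control-plane node name
kubectl -n kube-system logs kube-apiserver-<node-name>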

I have solved this issue: it was caused by proxy env vars passed to the API server by kubeadm. Because I am behind a corporate proxy, I have to run kubeadm with the proxy variables exported, and kubeadm later passes these env vars into the kube-apiserver config as:

   env:
    - name: NO_PROXY
      value: 127.0.0.1,10.0.0.0/8,172.16.0.0/12,<cluster_range>,.xxx.com
    - name: HTTPS_PROXY
      value: http://user:password@proxy-host:10080
    - name: HTTP_PROXY
      value: http://user:password@proxy-host:10080

I have removed these entries from the API server config and it works. I still have to figure out what else should be added to the NO_PROXY env var to make it work with the proxy enabled. Probably “.svc”.
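A sketch of what that change would look like in the kube-apiserver static pod manifest (/etc/kubernetes/manifests/kube-apiserver.yaml on a kubeadm control-plane node), with .svc appended per the guess above and .svc.cluster.local added as a further assumption:

   env:
    - name: NO_PROXY
      # .svc / .svc.cluster.local appended (assumption) so in-cluster webhook calls bypass the proxy
      value: 127.0.0.1,10.0.0.0/8,172.16.0.0/12,<cluster_range>,.xxx.com,.svc,.svc.cluster.local
    - name: HTTPS_PROXY
      value: http://user:password@proxy-host:10080
    - name: HTTP_PROXY
      value: http://user:password@proxy-host:10080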