rancher: cannot use serviceaccount with token auth against rancher
Rancher versions: rancher/rancher:2.0.6
Infrastructure Stack versions: kubernetes (if applicable): v1.10.3-rancher2-1
Docker version: (docker version, docker info preferred)
$ docker info
Containers: 64
 Running: 32
 Paused: 0
 Stopped: 32
Images: 34
Server Version: 17.03.2-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.16.7-1.el7.elrepo.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 94.4 GiB
Name: dockerblade-slot4-oben.ub.intern.example.com
ID: KXFV:3XKT:RY4N:SGZE:ZCNB:57PH:BLWT:H27S:K6OE:OVKA:UJLB:O3JE
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Http Proxy: http://proxy.example.com:3128
Https Proxy: http://proxy.example.com:3128
No Proxy: localhost,127.0.0.1,.example.com
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
$ uname -r
4.16.7-1.el7.elrepo.x86_64
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO) Bare-metal
Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB) single node rancher
Environment Template: (Cattle/Kubernetes/Swarm/Mesos) Kubernetes
Steps to Reproduce:
- create a serviceaccount and a role/rolebinding:
$ cat | kubectl create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testaccount
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: testrole
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: testrolebinding
subjects:
- kind: ServiceAccount
  name: testaccount
roleRef:
  kind: Role
  name: testrole
  apiGroup: rbac.authorization.k8s.io
EOF
- get the token of the account
$ kubectl get secret $(kubectl get serviceaccount testaccount -o jsonpath={.secrets[0].name}) -o jsonpath={.data.token} | base64 -d
- use the token to perform a cluster operation
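Presumably the token is wired into kubectl roughly like the following sketch; the credential and context names are illustrative, and the cluster entry is the one Rancher's generated kubeconfig already defines:
# Sketch only: "testaccount"/"testaccount-ctx" are arbitrary names, and
# <rancher-cluster> is whatever cluster entry the Rancher kubeconfig uses.
$ TOKEN=$(kubectl get secret $(kubectl get serviceaccount testaccount -o jsonpath={.secrets[0].name}) -o jsonpath={.data.token} | base64 -d)
$ kubectl config set-credentials testaccount --token="$TOKEN"
$ kubectl config set-context testaccount-ctx --cluster=<rancher-cluster> --user=testaccount
$ kubectl config use-context testaccount-ctx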
Results:
$ kubectl auth can-i get pods
error: You must be logged in to the server (the server has asked for the client to provide credentials (post selfsubjectaccessreviews.authorization.k8s.io))
Instead, when I use the kube-apiserver directly, it works:
$ kubectl --cluster k8s auth can-i get pods
yes
(the cluster is defined in my .kube/config)
About this issue
- State: closed
- Created 6 years ago
- Comments: 20 (4 by maintainers)
This is more a workaround than a solution. Imagine a scenario where your kube-apiserver is not accessible from where you need to use kubectl (but Rancher is).
I just ran into this as well. Currently we restrict API server access to come only from the Rancher server, to make sure it is using Rancher auth, etc. We have a few integrations that use JWTs and service accounts, but I don’t have an easy way to give them access without opening up the API servers, creating a load balancer, etc. It would be awesome if the Rancher server could have a JWT auth pass-through, or another auth method that maps to a cluster’s service account using a JWT.
So we use the K8s auth method for accessing secrets in Hashicorp Vault. This uses the JWT token from the requesting pod’s configured service account, which is authenticated using the token reviewer service in the API server. In this setup it is necessary to send requests directly to the API server (or an external LB sitting atop it if you have an HA setup, or just have it configured that way to make DNS easier). As @vincent99 suggests, this could make for a less secure configuration; however, OTOH, the JWT token is scoped to one or more namespaces, and the associated Vault role and policies can mean that the level of access available is very fine-grained indeed (i.e. a single secret with read-only capability). In general I prefer not to use Rancher’s impersonation, since this creates a dependency on the availability of Rancher itself, which could impact our ability to manage deployments. Of course that can be mitigated by running HA and so forth, so I’m not advocating that anyone else should do that; that’s just our choice based on a number of factors.
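For context, the configuration side of that looks roughly like the following; this is a hedged sketch, and the host URL, CA file, and reviewer JWT file are illustrative values rather than ones from this thread:
# Sketch: enable the Kubernetes auth method and point it directly at the
# API server (or the external LB in front of it). All values are placeholders.
$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
    kubernetes_host="https://<kube-apiserver>:6443" \
    kubernetes_ca_cert=@ca.crt \
    token_reviewer_jwt=@reviewer-sa.jwt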
Sure, you can talk to the Kubernetes API directly at the endpoint the cluster exposes. The ask here is for you to be able to make a request to the Rancher server, but containing a token Rancher has no knowledge of… and then proxy that request (including the token) to the target cluster so that it works (if that is a token the cluster can verify).
But the flip side of this is that it removes a layer of protection and introduces direct exposure of all clusters to arbitrary requests from anyone that can reach the server container, even if the cluster itself is not directly reachable from the outside world at all, instead of the current behavior of only proxying through requests which have already been authorized by a Rancher token. This does not seem like a very good tradeoff.
@vincent99
tl;dr - Rancher-provisioned k8s clusters DO NOT WORK with the k8s + Vault integration
There is a scenario where this problem is directly affecting integration with the Hashicorp Vault “Kubernetes” auth engine, which, as I understand it, requires a JWT that was created at some previous point from an existing service account in the k8s cluster.
When an application “X” running in a pod needs to log in to Vault to get secrets, a Vault login request goes to the auth/kubernetes auth method on the Vault server. The Vault k8s auth engine then uses a JWT that was previously created from an existing k8s service account, which the engine is authorized to use, to authenticate to the k8s cluster API and then perform a token review
...apis/authentication.k8s.io/v1/tokenreviews
of another service account’s JWT token (for application “X”) in the k8s cluster. This is a real-world example of where JWT auth is needed and is prescribed by the Hashicorp Vault documentation. Since API auth seems to only be possible via Rancher-generated API tokens or other Rancher-supported auth methods, the JWT ‘Bearer Token’ provided by the Vault callback to the k8s cluster fails with “401 unauthorized” errors.
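For reference, the token review that Vault performs boils down to a single authenticated POST against the API server; a rough sketch (the endpoint, reviewer JWT, and pod JWT are placeholders):
# Sketch of the TokenReview call Vault's k8s auth engine makes; <kube-apiserver>,
# <reviewer-jwt>, and <pod-jwt> are placeholders, not values from this issue.
$ curl -k https://<kube-apiserver>:6443/apis/authentication.k8s.io/v1/tokenreviews \
    -H "Authorization: Bearer <reviewer-jwt>" \
    -H "Content-Type: application/json" \
    -d '{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenReview", "spec": {"token": "<pod-jwt>"}}'
Pointed at the Rancher proxy URL instead of the real API endpoint, this same request comes back 401, because neither JWT is a Rancher token.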
JWT service account tokens are not unknown to the underlying k8s cluster, so from a trust perspective it’s not an issue. The k8s cluster’s operators are explicitly trusting the Vault cluster on purpose to auth with that JWT. And, as @goffinf has mentioned, this is mitigated by limiting to specific namespaces and service accounts. This should be an operator decision, not a Rancher one, IMO.
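To illustrate how narrow that scoping can be, here is a sketch of a Vault role bound to a single namespace and service account (the role name, SA, namespace, and policy are all illustrative):
# Illustrative Vault role: only the named SA in the named namespace can log in,
# and it only receives the read-only policy.
$ vault write auth/kubernetes/role/app-x \
    bound_service_account_names=app-x \
    bound_service_account_namespaces=app-x-ns \
    policies=app-x-read-only \
    ttl=1h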
Okay, so the solution is to do the following steps (sketched in full further below):
Get the service account token:
Get the service account cert:
Configure cluster for kube config:
Note: the address 206.189.64.94:6443 is the actual Kubernetes API endpoint. You can fetch this from the Rancher API,
https://massimo.do.rancher.space/v3/clusters/c-gpn7w
Replace the hostname and cluster id; there should be an entry called
"apiEndpoint": "https://206.189.64.94:6443",
You will need to use that endpoint when using a service account to talk to the Kubernetes cluster.
Set credentials for the service account:
Set the service account context for kube config:
Switch context to test:
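A consolidated sketch of those steps (the secret handling, file names, and the cluster/user/context names are illustrative; the server URL is the "apiEndpoint" value mentioned above):
# Get the service account token and CA cert from the SA's secret
$ SECRET=$(kubectl get serviceaccount testaccount -o jsonpath={.secrets[0].name})
$ TOKEN=$(kubectl get secret $SECRET -o jsonpath={.data.token} | base64 -d)
$ kubectl get secret $SECRET -o jsonpath="{.data['ca\.crt']}" | base64 -d > sa-ca.crt
# Configure a cluster entry that points at the real API endpoint from the Rancher API
$ kubectl config set-cluster sa-direct --server=https://206.189.64.94:6443 --certificate-authority=sa-ca.crt --embed-certs=true
# Set credentials for the service account, create a context, and switch to it
$ kubectl config set-credentials testaccount --token="$TOKEN"
$ kubectl config set-context test --cluster=sa-direct --user=testaccount
$ kubectl config use-context test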
Results:
For anyone interested, using the authorized cluster endpoint described by @vincent99 and documented here worked for me: https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#authorized-cluster-endpoint
Grab the Secret name for your SA
kc get sa -n <namespace> <SA name> -o yaml
Grab the cert from the SA’s secret
kc get secret <Secret Name> -n <namespace> -o jsonpath="{.data['ca\.crt']}" | base64 -D
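For completeness, the matching token presumably comes from the same secret (a sketch; kc is assumed to be a kubectl alias):
$ kc get secret <Secret Name> -n <namespace> -o jsonpath="{.data.token}" | base64 -D
That token and CA cert then slot into the same set-cluster / set-credentials / set-context sequence sketched earlier, with the server set to the authorized cluster endpoint.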