kubernetes: User "system:anonymous" cannot proxy services in the namespace "default".

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): User “system:anonymous” cannot proxy services in the namespace “default”.

Kubernetes version (use kubectl version):

~ kubectl version --short
Client Version: v1.6.0-alpha.0.2996+add3a08a6d3648
Server Version: v1.6.0-alpha.0.2996+add3a08a6d3648

Environment:

  • Cloud provider or hardware configuration: GCE

What happened: Unable to access services via the proxy endpoint.

What you expected to happen: The service to be accessible via /api/v1/proxy/…

How to reproduce it (as minimally and precisely as possible):

  • Check out sources from HEAD
  • make quick-release
  • cluster/kube-up.sh

Cannot access any service via the proxy (see kubectl cluster-info for some examples), for example:

~ curl -k https://<server>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
User "system:anonymous" cannot proxy services in the namespace "kube-system".

Anything else we need to know: This is a regression from 1.5.1.

/cc @kubernetes/sig-auth-misc @kubernetes/sig-network-misc

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 11
  • Comments: 39 (20 by maintainers)

Most upvoted comments

I’ve made similar arguments. I completely, totally, 100% believe that moving from “single use mode” to “multi-user” clusters is the right thing to do, and we should do it ASAP (last year, if possible). It is going to hurt a LOT of people, and we need to respect that and allow users to opt out for quite a while. We also need REALLY good docs, and error messages that can be googled along with SEO’ed solutions to those error messages.

Writing code is easy, rolling it out is hard.

@mml

The user experience is still weird here IMO. I booted a cluster via ./kube-up.sh and it presents me with those grafana and heapster URLs, and there is little or no documentation about how to access them.

Just clicking those URLs gives me the “User "system:anonymous" cannot proxy services in the namespace "default".” error.

If Kubernetes knows that the grafana and heapster URLs it prints on the terminal at the end of kube-up.sh are inaccessible, it should at the very minimum point me to instructions for how to open them.

I think we should keep this bug open.

From a UX perspective, I think instead of printing those URLs, we should be recommending that kubectl proxy be run and then those endpoints be accessed through the local proxy.
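
For example, a minimal version of that flow (the path below mirrors the dashboard example earlier in this thread; substitute the heapster/grafana service names as needed):

# kubectl proxy authenticates to the apiserver with the credentials from
# the active kubeconfig context and listens on localhost.
~ kubectl proxy --port=8001
# In a second terminal, reach the same service through the local proxy:
~ curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard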

@liggitt @thockin I have the latest 1.6.4 Kubernetes installed on my GCP cluster but cannot figure out how to expose it on an IP address. How do I make it so that I don’t need shell access and can instead access Kubernetes in the browser?

User “system:anonymous” cannot get at the cluster scope.: “Unknown user "system:anonymous"”

I think we need to dig deeper into our 1.5->1.6 permissions story.

I agree that the vast permissions granted by ABAC are terrible and we should do whatever we can to push 1.6 users to set up sane permissions w/ RBAC.

But… We shouldn’t break existing functionality on an upgrade to 1.6 (even if that functionality is something awful like “I need some random service account to be able to exec into system pods”). This could either be a bootstrap one-shot on upgrade to mimic ABAC permissions w/ RBAC bindings (hard, not always possible) or a flag that allows ABAC to stay on (easy, but kinda disappointing).

We also should (maybe) provide an option to create new clusters with the pre-1.6 permission model. Many people have workflows of “Create a cluster at whatever version is released, do a bunch of stuff, tear the cluster down.” It’d be nice to give them a nice path forward. That might be “figure out the permissions you actually need and add the bindings in your CI script”, or it could just be “set this hacky environment variable to keep ABAC until you figure out the right way.”

It looks like https://github.com/kubernetes/kubernetes/pull/39537/files may be a step towards providing a smoother transition, so I’m guessing @deads2k and @liggitt and others have probably put some thought into this already and I’m just catching up.
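
For clusters that want that escape hatch, the usual shape (a sketch; exact paths and the kube-up knob vary by deployment) is to keep the legacy ABAC policy active alongside RBAC on the apiserver, so a request is authorized if either module allows it:

# Illustrative apiserver flags only: run RBAC and the old ABAC policy in
# parallel during the transition. The policy file path is an example.
kube-apiserver \
  --authorization-mode=RBAC,ABAC \
  --authorization-policy-file=/etc/srv/kubernetes/abac-authz-policy.jsonl \
  ...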

Dashboard documentation now directs users to use kubectl proxy and access the dashboard at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

https://github.com/kubernetes/dashboard/#documentation also links to the various ways to expose the dashboard at https://github.com/kubernetes/dashboard/wiki/Accessing-dashboard

/close

@drgomesp the top of the dashboard setup instructions links to https://github.com/kubernetes/dashboard/wiki/Access-control which explains how to set up access control for the dashboard and also explains a bit more why things don’t have access out of the box since 1.7. You can follow the instructions at the bottom to create a ClusterRoleBinding for the dashboard’s service account (heed the warnings too though).
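
For completeness, the binding described at the bottom of that page amounts to roughly the following (assuming the dashboard runs as the kubernetes-dashboard service account in kube-system, as in the standard manifest). It grants full cluster access, so the warnings on that page about using it only on test/dev clusters apply:

# Bind cluster-admin to the dashboard’s service account. Broad access;
# only appropriate where the linked warnings say it is acceptable.
~ kubectl create clusterrolebinding kubernetes-dashboard \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:kubernetes-dashboard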

@liggitt yes absolutely, everything works fine except from the dashboard. 😕

Accessing http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/workload?namespace=default gives me:

Forbidden (403)

User “system:serviceaccount:kube-system:default” cannot list replicationcontrollers in the namespace “default”.: “Unknown user “system:serviceaccount:kube-system:default”” (get replicationcontrollers)

That isn’t actually the same result… that message is coming from the dashboard (which you successfully accessed) attempting to make an API call. As of https://github.com/kubernetes/kubernetes/pull/46750, GKE deployments do not give RBAC permissions to the default service account.
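
If you do want that particular dashboard instance (running as kube-system’s default service account) to be able to read workloads, one option is a namespaced binding to the built-in view role rather than anything cluster-wide; the binding name below is arbitrary:

# Read-only access to the "default" namespace for the service account the
# dashboard happens to be running as.
~ kubectl create rolebinding dashboard-view-default \
    --clusterrole=view \
    --serviceaccount=kube-system:default \
    --namespace=default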

Accessing other URLs printed out by kubectl proxy, such as the one for Heapster (after replacing the master IP with http://localhost:8001), returns a 404 Not Found for http://localhost:8001/api/v1/namespaces/kube-system/services/heapster/proxy/.

If you got a 404, then the issue wasn’t authentication (which returns a 401) or authorization (which returns a 403)… the item being accessed just didn’t exist.

I would have expected that the authentication info in my active context for kubectl was used when accessing anything via kubectl proxy.

You are correct, that is what is happening.

@liggitt I’m facing the same problem, but I think you can’t change the apiserver parameters (anonymous-auth=false) if you run your cluster on GCE (because Google manages the master node for you).

As I wrote in another issue: https://github.com/kubernetes/dashboard/issues/1728#issuecomment-292953096

That means I am not able to log in to my dashboard. Since k8s v1.6, anonymous authentication is enabled by default. With it enabled, the apiserver treats a request without any login information as anonymous (which is wrong; it should instead prompt for a login). I tried to work around this on my on-premises cluster (where I have access to the apiserver manifest) and started it with the flag --anonymous-auth=false. Then I am able to authenticate via basic auth (the dashboard shows me a login prompt), but after about 1:20 minutes the apiserver restarts because the healthz checks fail (they are made as anonymous requests) … So this is not a solution.

So starting the apiserver with the --anonymous-auth=false flag doesn’t solve this problem.
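
For reference, the experiment described above is just a flag change on the apiserver command line (e.g. in its static pod manifest); the excerpt below is illustrative only, and per the comment it trips the anonymous healthz probes on that setup:

# Illustrative excerpt: disable anonymous auth and point at a basic-auth
# file so the apiserver can authenticate the dashboard login. On the
# setup above this caused the healthz-driven restarts described.
kube-apiserver \
  --anonymous-auth=false \
  --basic-auth-file=/etc/kubernetes/basic_auth.csv \
  ...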