kubelogin: Fails to connect to existing server
**Describe the bug**

In some circumstances, kubelogin will not talk to an existing kubelogin server, and then fails because it cannot start a new server on the same port.
```console
$ kubectl version
error: get-token: could not get a token: authentication error: authcode-browser error: authentication error: authorization code flow error: oauth2 error: authorization error: could not start a local server: no available port: could not listen: listen tcp 127.0.0.1:20904: bind: address already in use
```
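The failure mode in that error can be reproduced outside kubelogin: once one process holds the listen address, a second bind attempt fails with `EADDRINUSE`, which is the `address already in use` error surfaced above. A minimal sketch (generic sockets, not kubelogin's code; port 0 is used so the example runs anywhere, whereas the report pins port 20904):

```python
import errno
import socket

# One listener already holds the port, as a running kubelogin server would.
# (Port 0 asks the OS for any free port; the report's fixed port is 20904.)
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
first.listen()
port = first.getsockname()[1]

# A second process using the same fixed --listen-address cannot bind.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
caught = None
try:
    second.bind(("127.0.0.1", port))
except OSError as e:
    caught = e.errno  # EADDRINUSE: the port is taken
second.close()
first.close()

print("second bind failed with EADDRINUSE:", caught == errno.EADDRINUSE)
```

Because the `--listen-address` in the kubeconfig below is fixed, a leftover server makes every later invocation hit this path.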
**To Reproduce**

I am not entirely sure how to reproduce this. What I have seen is that it works one day and then fails the next. Whether it is because the computer went to sleep in between, or the refresh token expired, or something else, I do not know. I am happy to do further debugging but need guidance.
**Kubeconfig excerpt**

```yaml
users:
- user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-client-id=kubernetes
      - --oidc-extra-scope=roles
      - --listen-address=localhost:20904
      - --oidc-issuer-url=https://keycloak/auth/realms/master
      command: kubectl
      env: null
```
**Expected behavior**

I expect kubelogin to use the existing server if it is running, and to shut it down if it is somehow unusable. Instead, it keeps the server running but does not use it.
**Environment**
- OS: macOS 10.15.6
- kubelogin version: v1.20.0
- kubectl version: v1.16.6-beta.0 (ships with Docker for Mac)
- OpenID Connect provider: Keycloak
**About this issue**
- State: open
- Created 4 years ago
- Reactions: 1
- Comments: 16 (6 by maintainers)
I just tried it and the old processes don’t seem to be closed.
I just released https://github.com/int128/kubelogin/releases/tag/v1.20.1; please try it.
@int128 No, I am intentionally leaving the window open for 3 minutes (or the timeout duration value) so the timeout condition is triggered. Then another tab opens. This happens exactly 5 times.
Sorry for that! I actually never mentioned that I was not logging in!
I have the following logs. But at this point, I think there is a retry somewhere in the Kubernetes client and this has nothing to do with the plugin. Sadly, I don't see any option to disable it with `kubectl options`.

@int128 it seems to fix it on my side, or at least mitigate the issue, with a weird side effect.
With the CLI, I get a transport error when I close a tab, then I get the `get-token` error after 3 minutes and a new tab reopens. For as long as the process is running, a new tab will open every 3 minutes.

With Lens, once the `id_token` expires, the login page pops up in the browser. If I wait 180 seconds without logging in, another tab opens with the login page.

The good thing is that the process always reuses the same port when it reopens a tab; the bad news is that it stays in that loop instead of exiting completely and freeing the port.
I would expect Lens or the CLI to just return an authentication error after the timeout.
EDIT: Actually, it retries exactly 5 times…
https://github.com/int128/oauth2cli/pull/51 may fix this issue. I will release the new version of kubelogin later.
The same thing is happening to me, especially when using https://k8slens.dev/, but it also causes problems on the CLI. Whenever it prompts you to log in and opens the browser, if you close it then, the socket (8000) will stay open forever. If you use the CLI, don't cancel the process manually. The second time, it will use socket 18000.
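The two ports named in that comment (8000, then 18000) behave like an ordered fallback list: the first orphaned server occupies 8000, the next invocation falls back to 18000, and a third invocation finds no free candidate and fails. That scan can be sketched as follows (a hypothetical `bind_first_available` helper, not kubelogin's actual code; the demo uses OS-assigned ports so it runs anywhere):

```python
import socket

def bind_first_available(candidates):
    """Try each candidate port in order; return a bound listener or None."""
    for port in candidates:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))
            sock.listen()
            return sock
        except OSError:
            sock.close()  # port taken, try the next candidate
    return None  # corresponds to the "no available port" error above

# Simulate the report: two hung servers already hold both candidate ports.
hung1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
hung1.bind(("127.0.0.1", 0))   # stands in for 8000 in the report
hung1.listen()
hung2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
hung2.bind(("127.0.0.1", 0))   # stands in for 18000
hung2.listen()
ports = [hung1.getsockname()[1], hung2.getsockname()[1]]

third_attempt = bind_first_available(ports)
print("third attempt got a port:", third_attempt is not None)
```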
You can see that the socket is still open with
Finally, you will get the error the third time.
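The check described above (presumably done with a tool such as `netstat` or `lsof`; the original command did not survive extraction) can also be done programmatically by attempting a TCP connection. A generic sketch, not a kubelogin feature:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        probe.settimeout(1.0)
        return probe.connect_ex((host, port)) == 0

# A hung kubelogin server on the default port would show up as:
#   port_in_use(8000) returning True
```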
You can work around the problem by killing these hanging processes before calling `kubectl oidc-login`.

I believe some sort of timeout that would close the socket and the kubelogin process if it takes too long would solve or mitigate the issue.
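That suggested timeout can be sketched with a plain socket timeout: wait a bounded time for the browser callback, then tear the listener down so the port is freed either way. This is illustrative only, under the assumption of a single-connection callback server; it is not the actual fix that landed in oauth2cli:

```python
import socket

def wait_for_callback(listener, timeout_seconds):
    """Accept one browser callback, or give up and free the port."""
    listener.settimeout(timeout_seconds)
    try:
        conn, _ = listener.accept()
        conn.close()
        return True               # callback arrived in time
    except socket.timeout:
        return False              # timed out; caller should exit
    finally:
        listener.close()          # port is released either way

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # any free port for the demo
listener.listen()
got = wait_for_callback(listener, 0.2)  # nobody connects, so this times out
print("got callback:", got)
```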