kubernetes: client-go: client/auth/oidc sometimes fails to persist refresh token due to empty filename

What happened?

Periodically we see a long-lived client fail to refresh its OIDC access token because persisting the new token to the KUBECONFIG file fails with "could not persist new tokens: open : no such file or directory" (note the empty filename in the open call). The only recovery is restarting the application. The application itself points at a single explicit config file:

	// Resolve the kubeconfig path: prefer $KUBECONFIG, else fall back to ~/.kube/config.
	kubeconfig, ok := os.LookupEnv(clientcmd.RecommendedConfigPathEnvVar)
	if !ok {
		kubeconfig = clientcmd.RecommendedHomeFile
	}
	loader := &clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfig}

Looking at the error site on L283 of oidc.go:

https://github.com/kubernetes/kubernetes/blob/9af2ece18abc3188aa280cb1f1c35a8a4cb791c3/staging/src/k8s.io/client-go/plugin/pkg/client/auth/oidc/oidc.go#L275-L284
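For reference, my reading of that block (paraphrased here, not quoted verbatim from the commit) is that persistence runs first, and the refreshed tokens are only promoted into the in-memory config if it succeeds:

	// Paraphrase of the linked oidc.go error site (condensed, not verbatim):
	// the persister runs first; the refreshed tokens only become the
	// in-memory config if it succeeds.
	if err := p.persister.Persist(newCfg); err != nil {
		return "", fmt.Errorf("could not persist new tokens: %v", err)
	}
	p.cfg = newCfg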

This suggests that the call to WriteToFile in config.go’s ModifyConfig func was somehow passed an empty filename as destinationFile, but I can’t see a code path that could result in that.

https://github.com/kubernetes/kubernetes/blob/9af2ece18abc3188aa280cb1f1c35a8a4cb791c3/staging/src/k8s.io/client-go/tools/clientcmd/config.go#L269-L294
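Condensing my reading of that AuthInfos loop (paraphrased from the linked commit, not quoted verbatim): for a user entry that already exists in the starting config, the destination filename is taken from the AuthInfo’s LocationOfOrigin, and GetDefaultFilename is only consulted for brand-new entries:

	// Condensed paraphrase of the linked ModifyConfig AuthInfos loop:
	for key, authInfo := range newConfig.AuthInfos {
		startingAuthInfo, exists := startingConfig.AuthInfos[key]
		if !reflect.DeepEqual(authInfo, startingAuthInfo) || !exists {
			// Existing users write back to the file they were loaded from;
			// an empty LocationOfOrigin here would produce the empty
			// destinationFile seen in our error.
			destinationFile := authInfo.LocationOfOrigin
			if !exists {
				destinationFile = configAccess.GetDefaultFilename()
			}
			configToWrite, err := getConfigFromFile(destinationFile)
			if err != nil {
				return err
			}
			if err := WriteToFile(*configToWrite, destinationFile); err != nil {
				return err
			}
		}
	}

So the only route I can see to an empty destinationFile is an AuthInfo whose LocationOfOrigin was never set, which shouldn’t happen when the config is loaded from an explicit file.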

Is this something you’ve seen before? Any thoughts as to how we might get into this situation?

What did you expect to happen?

A long-lived client-go application using OIDC should be able to refresh its access token indefinitely and keep its session alive without issue.

How can we reproduce it (as minimally and precisely as possible)?

Point a long-lived client-go application at a KUBECONFIG containing OIDC-based authentication and a short-lived access token, e.g. the user entry below (a minimal client sketch follows it):

	- name: username@example.com/userID/iam.example.com-identity
	  user:
	    auth-provider:
	      config:
	        client-id: clientID
	        client-secret: clientSecret
	        id-token: accessToken
	        idp-issuer-url: https://iam.example.com/identity
	        refresh-token: refreshToken
	      name: oidc
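A minimal sketch of such a long-lived client (the namespace-list polling loop is illustrative):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		// Register the oidc auth provider plugin; without this the clientset
		// fails with "no Auth Provider found for name oidc".
		_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Same config resolution as our application above.
		kubeconfig, ok := os.LookupEnv(clientcmd.RecommendedConfigPathEnvVar)
		if !ok {
			kubeconfig = clientcmd.RecommendedHomeFile
		}
		loader := &clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfig}

		// The deferred-loading client config is what wires up the persister
		// the oidc auth provider uses to write refreshed tokens back to disk.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			loader, &clientcmd.ConfigOverrides{}).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Keep polling past the access token's lifetime so the plugin is
		// repeatedly forced to refresh and persist.
		for {
			if _, err := client.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{}); err != nil {
				log.Printf("list failed: %v", err)
			}
			time.Sleep(time.Minute)
		}
	}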

Anything else we need to know?

Aside: if persisting the refreshed token to disk fails in the func (p *oidcAuthProvider) idToken() (string, error) call, it is currently treated as a hard failure: the RoundTripper neither updates the in-memory tokens nor proceeds with the request. Whilst I can see the benefit of that in making the caller aware of the persistence issue, it means an inability to update the KUBECONFIG on disk renders the client application unusable, even though its round-trip calls could have succeeded with the refreshed token.
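For illustration, the ordering we’d have expected (a sketch only, not the plugin’s actual code) would promote the tokens in memory first and treat persistence as best-effort:

	// Illustrative alternative to the current hard failure in idToken():
	// keep serving requests with the refreshed token even if writing it
	// back to the kubeconfig fails.
	p.cfg = newCfg // update the in-memory tokens first

	if err := p.persister.Persist(newCfg); err != nil {
		// Surface the persistence problem without failing the round trip;
		// a later process restart would still need a valid refresh token.
		klog.Errorf("could not persist new tokens: %v", err)
	}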

Kubernetes version

k8s.io/api v0.22.1
k8s.io/apimachinery v0.22.1
k8s.io/client-go v0.22.1

Cloud provider

N/A

OS version

N/A

Install tools

N/A

Container runtime (CRI) and version (if applicable)

N/A

Related plugins (CNI, CSI, …) and versions (if applicable)

N/A

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 15 (8 by maintainers)

Most upvoted comments

@stlaz @s-urbaniak either of y’all care to take a look and see if we can improve anything here?

/remove-lifecycle stale