rancher: AD-authenticated user loses permissions and is duplicated when the distinguishedName changes

What kind of request is this (question/bug/enhancement/feature request):

bug

Steps to reproduce (fewest steps possible):

On a Rancher HA installation:

  1. Enable and configure AD authentication, with sAMAccountName as the login attribute.
  2. Have a user log in, so the entry is added to the list of users (with an id like u-xxxxxxxx).
  3. Create a project and add the user as a member.
  4. In AD, modify the distinguishedName of the user. In our case, the user's dn read something like CN=Surname\, Name,OU=New Users,OU=Users,OU=Office,... and was later changed to CN=Surname\, Name,OU=Users,OU=Office,... (see the sketch after this list).
  5. Have the user log in again.
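
For reference, the DN change in step 4 is a standard LDAP ModifyDN (move) operation. Below is a minimal sketch using the go-ldap library; the host, bind credentials, and the DC=example,DC=com suffix are hypothetical placeholders, not values from our environment:

```go
// A sketch of step 4: moving the user to a different OU, which rewrites
// the entry's distinguishedName. Uses github.com/go-ldap/ldap/v3.
// Host, credentials, and the DC=example,DC=com suffix are hypothetical.
package main

import (
	"log"

	"github.com/go-ldap/ldap/v3"
)

func main() {
	conn, err := ldap.DialURL("ldaps://ad.example.com:636")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if err := conn.Bind("CN=admin,OU=Users,DC=example,DC=com", "secret"); err != nil {
		log.Fatal(err)
	}

	// Keep the RDN (CN=Surname\, Name) but re-parent the entry from
	// OU=New Users,OU=Users,... to OU=Users,... . The DN, and with it
	// Rancher's principal ID, changes as a side effect.
	req := ldap.NewModifyDNRequest(
		"CN=Surname\\, Name,OU=New Users,OU=Users,OU=Office,DC=example,DC=com",
		"CN=Surname\\, Name",                   // RDN stays the same
		true,                                   // delete the old RDN value
		"OU=Users,OU=Office,DC=example,DC=com", // new parent container
	)
	if err := conn.ModifyDN(req); err != nil {
		log.Fatal(err)
	}
}
```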

Result:

The user no longer sees his project in the dropdown list of projects. A new user appears in the list of users in Rancher's web UI (this time with a new id, u-yyyyyyyy). In the list of project members, the initial user (with id u-xxxxxxxx) is still shown, but with an error message:

Unable to fetch user info activedirectory_user://CN=Surname, Name,OU=New Users,OU=Users,OU=Office,…

In the list of global users, one sees two entries side by side:

Unable to fetch user info u-xxxxxxxx activedirectory_user://CN=Surname, Name,OU=New Users,OU=Users,OU=Office,…

and

Surname, Name u-yyyyyyyy User

Other details that may be helpful:

I'm quite certain this happens because Rancher's API uses the entire distinguishedName as the userPrincipalID, and this is, IMHO, a very unfortunate choice: the DN is rewritten whenever a user is renamed or moved to another OU, so if anything it is one of the least stable attributes available.
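
To illustrate the point: objectGUID is an attribute that Active Directory keeps constant for the lifetime of an object, which makes it a far better candidate for a principal ID than the DN. A minimal sketch, again using go-ldap with a hypothetical host, bind DN, and base DN, that resolves a user by sAMAccountName and reads both attributes:

```go
// A sketch: look up a user by sAMAccountName (the login attribute from
// step 1) and read both the mutable DN and the immutable objectGUID.
// Host, bind DN, base DN, and the jdoe login are hypothetical placeholders.
package main

import (
	"fmt"
	"log"

	"github.com/go-ldap/ldap/v3"
)

func main() {
	conn, err := ldap.DialURL("ldaps://ad.example.com:636")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if err := conn.Bind("CN=reader,OU=Users,DC=example,DC=com", "secret"); err != nil {
		log.Fatal(err)
	}

	req := ldap.NewSearchRequest(
		"DC=example,DC=com", // search base (hypothetical)
		ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 1, 0, false,
		"(sAMAccountName=jdoe)", // placeholder login
		[]string{"distinguishedName", "objectGUID"},
		nil,
	)
	res, err := conn.Search(req)
	if err != nil {
		log.Fatal(err)
	}
	if len(res.Entries) != 1 {
		log.Fatalf("expected exactly one entry, got %d", len(res.Entries))
	}
	entry := res.Entries[0]

	fmt.Println("DN (changes on rename/move):", entry.DN)
	// objectGUID is a 16-byte binary value; print it as hex. (AD tools
	// display it with mixed-endian byte order, so the string may differ.)
	fmt.Printf("objectGUID (stable): %x\n", entry.GetRawAttributeValue("objectGUID"))
}
```

A principal keyed on a value like this would survive the OU move from step 4 above.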

Environment information

  • Rancher version: v2.2.3 (but AD auth was configured before we upgraded from v2.1.0)
  • Installation option (single install/HA): HA with RKE as detailed in the docs.

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Imported (the cluster created by RKE with Rancher installed via the Helm chart).
  • Machine type (cloud/VM/metal) and specifications (CPU/memory): 5 metal nodes, 40 cores/256 GB RAM each
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version (use docker version):
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:35:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 01:59:36 2019
  OS/Arch:          linux/amd64
  Experimental:     false


Most upvoted comments

For me this is a security issue, as users are identified by the DN, which is not unique over time: a different account that later receives the same DN would inherit the original user's permissions.