rancher: Cannot assign non-RKE1 Clusters to a Fleet Workspace

Rancher Server Setup

  • Rancher version: 2-6-cd3c9d7
  • Installation option (Docker install/Helm Chart): docker

User Information

  • Admin

Describe the bug

Changes to the property fleetWorkspaceName of a v3/cluster are immediately reverted.

To Reproduce

Via the Dashboard

  • Import a cluster (I used a local k3d cluster)
  • Navigate to Continuous Delivery
  • Create a new Workspace in Advanced / Workspaces
  • Go to Clusters, click on the three-dot menu for the downstream cluster and use the Assign To action to change the workspace to the new workspace
  • Change the workspace selector in the header to the new workspace

Via the API UI

  • Follow the steps above up to and including creating a workspace
  • Open <rancher url>/v3/clusters/<cluster id> in a browser
  • Edit the cluster and set fleetWorkspaceName to the new workspace id (note: this will be the same as the name); a scripted version of this edit is sketched just below this list
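A minimal sketch of that API step, assuming a Rancher API bearer token with permission to edit the cluster; RANCHER_URL, TOKEN, CLUSTER_ID and the workspace name are placeholders, not values from this report:

# Hypothetical reproduction of the /v3/clusters edit described above.
# All placeholder values below are illustrative, not from this issue.
import requests

RANCHER_URL = "https://rancher.example.com"   # placeholder Rancher URL
TOKEN = "token-xxxxx:yyyyy"                   # placeholder API bearer token
CLUSTER_ID = "c-abc123"                       # placeholder downstream cluster id
WORKSPACE = "my-new-workspace"                # the workspace created earlier

headers = {"Authorization": f"Bearer {TOKEN}"}
url = f"{RANCHER_URL}/v3/clusters/{CLUSTER_ID}"

# Fetch the current cluster object, change only fleetWorkspaceName, and PUT it back.
cluster = requests.get(url, headers=headers, verify=False).json()  # verify=False only for self-signed test setups
cluster["fleetWorkspaceName"] = WORKSPACE
resp = requests.put(url, headers=headers, json=cluster, verify=False)

# The PUT response echoes the new value, matching "the response to the edit
# contains the correct value" in the Result section below.
print(resp.json().get("fleetWorkspaceName"))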

Result

Via the Dashboard

  • The Clusters list is empty

Via the API UI

  • The response to the edit contains the correct value
  • Navigate to <rancher url>/v3/clusters/<cluster id> and observe that fleetWorkspaceName is not set to the new value (see the check sketched below)
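A quick way to observe the revert, reusing the same placeholder values as the sketch above (again illustrative, not part of the original report):

# Re-read the cluster shortly after the PUT; per this issue, for non-RKE1
# clusters fleetWorkspaceName is back at its previous value.
import requests

RANCHER_URL = "https://rancher.example.com"   # placeholder Rancher URL
TOKEN = "token-xxxxx:yyyyy"                   # placeholder API bearer token
CLUSTER_ID = "c-abc123"                       # placeholder downstream cluster id

cluster = requests.get(f"{RANCHER_URL}/v3/clusters/{CLUSTER_ID}",
                       headers={"Authorization": f"Bearer {TOKEN}"},
                       verify=False).json()
print(cluster.get("fleetWorkspaceName"))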

Expected Result

In the Dashboard

  • The Clusters list shows the cluster

Via the API UI

  • The fleetWorkspaceName value is retained

Additional context

  • This feature passed QA in 2.6.3 after a UI bug was fixed - https://github.com/rancher/dashboard/issues/4799.
  • During that testing they noted some patchiness. On a Docker install I see this all the time.
  • There’s a discussion in our private Slack that covers the above; contact me for details.
    • In the thread @prachidamle confirmed that the property looked like the right one to set.

Update - Assigning RKE1 clusters to a Fleet Workspace works fine

SURE-4522

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 8
  • Comments: 26 (16 by maintainers)

Most upvoted comments

Hello,

Also hitting this issue constantly.

  • Rancher version: 2.7.1 (but also encountered it in previous 2.6.x versions)
  • Clusters: multiple k3s clusters on 1.24 (but also encountered it in previous k3s versions)

Reproduce steps:

  • Have a Rancher instance on k3s.
  • Bootstrap a fresh k3s cluster, for example k3s via Rancher Desktop.
  • Import cluster into Rancher with all default values, wait for it to join and become active.
  • Attempt to move it from the fleet-default workspace to a custom one; nothing happens (see the check sketched after this list).
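One way to confirm that the move really had no effect is to list the /v3/clusters endpoint from the issue body; the URL and token below are placeholders:

# List every cluster with its current fleetWorkspaceName; after the "Assign To"
# attempt, the affected cluster still reports fleet-default.
import requests

RANCHER_URL = "https://rancher.example.com"   # placeholder Rancher URL
TOKEN = "token-xxxxx:yyyyy"                   # placeholder API bearer token

resp = requests.get(f"{RANCHER_URL}/v3/clusters",
                    headers={"Authorization": f"Bearer {TOKEN}"},
                    verify=False)
for c in resp.json()["data"]:
    print(c["name"], c.get("fleetWorkspaceName"))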

Hi,

I have the same issue with clusters created on a Harvester cluster using Harvester Node Driver, either K3S or RKE2.

Assigning a non-default Fleet Workspace to the clusters does not take effect, whether done through the UI or by directly editing the fleetWorkspaceName field.

Rancher Server Setup

  • Rancher version: v2.6.3
  • Installation option (Docker install/Helm Chart): Helm Chart

The Rancher setup was running v2.6.1 before the update to v2.6.3, with several clusters (imported k3s) managed using a custom fleet workspace. After the upgrade, it is still possible to change the fleet workspace assignment on those clusters with no issue.

I then created several clusters using the Harvester integration: k3s and rke2. By default they are assigned to fleet-default, and it is not possible to change their workspace; it remains fleet-default.

I can provide any information needed.

Ben.

Tested on 2.8-head (2e6895d), with the new provisioningv2-fleet-workspace-back-population feature flag enabled (required for this experimental feature):

  • create a workspace in Fleet
  • add a gitRepo to the new workspace (a scripted sketch of this step follows the checklist below)
  • add a gitRepo to the fleet-default workspace

In each of the following, observe that the fleet-default gitRepo resources are removed and the new workspace’s gitRepo resources are added:

  • transfer node driver rke1 cluster (should not be affected, existing behavior) – pass
  • transfer node driver rke2 cluster to new workspace – pass
  • transfer node driver k3s cluster to new workspace – pass
  • transfer custom rke2 cluster to new workspace – pass
  • transfer custom k3s cluster to new workspace – pass

Transfer all clusters back to the original workspace and observe that the fleet-default gitRepo resources are added and the new workspace’s gitRepo resources are removed – pass
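For reference, the “add a gitRepo to the new workspace” setup step can be scripted against the Rancher local cluster, since each Fleet workspace corresponds to a namespace there. The sketch below assumes kubeconfig access to the local cluster; the workspace name, repo URL and path are illustrative placeholders:

# Create a Fleet GitRepo in the new workspace's namespace on the Rancher local
# cluster. Workspace name, repo and path are illustrative placeholders.
from kubernetes import config, dynamic
from kubernetes.client import api_client

config.load_kube_config()  # kubeconfig pointing at the Rancher local cluster
client = dynamic.DynamicClient(api_client.ApiClient())
gitrepos = client.resources.get(api_version="fleet.cattle.io/v1alpha1", kind="GitRepo")

workspace = "my-new-workspace"  # a Fleet workspace maps to a namespace on the local cluster
gitrepo = {
    "apiVersion": "fleet.cattle.io/v1alpha1",
    "kind": "GitRepo",
    "metadata": {"name": "sample-repo", "namespace": workspace},
    "spec": {
        "repo": "https://github.com/rancher/fleet-examples",
        "paths": ["simple"],
        "targets": [{"clusterSelector": {}}],  # match every cluster in this workspace
    },
}
gitrepos.create(body=gitrepo, namespace=workspace)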

Disable the flag with rke2/k3s clusters assigned to a non-fleet-default workspace – all rke2/k3s clusters are removed from non-fleet-default workspaces. This doesn’t sound ideal, but since this is an experimental feature with no definition of what should happen when it is disabled, I will consult with the team on whether this behavior is acceptable for an initial release.
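For anyone retesting this, the flag can be flipped from the UI (Global Settings > Feature Flags) or, as a sketch, by patching the corresponding features.management.cattle.io resource on the Rancher local cluster; only the flag name comes from this thread, everything else below is illustrative and assumes direct patching of the Feature resource works in your setup:

# Toggle the provisioningv2-fleet-workspace-back-population feature flag by
# patching its Feature resource on the Rancher local cluster (a sketch; the
# same change can be made from the Rancher UI's Feature Flags page).
from kubernetes import config, dynamic
from kubernetes.client import api_client

config.load_kube_config()  # kubeconfig pointing at the Rancher local cluster
client = dynamic.DynamicClient(api_client.ApiClient())
features = client.resources.get(api_version="management.cattle.io/v3", kind="Feature")

features.patch(
    name="provisioningv2-fleet-workspace-back-population",
    body={"spec": {"value": True}},           # set to False to disable again
    content_type="application/merge-patch+json",
)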

This was tested on both a non-hardened rke1 local cluster and a hardened rke2 local cluster.

@slickwarren we’re waiting on confirmation that the feature flag exists before merging the dashboard issue. We couldn’t see it in -head and are waiting on @snasovich / @Oats87.

@aiyengar2, please summarize the solution we’ve discussed (feel free to remove yourself from the assignees once you do). Meanwhile, this is moved out of the 2.6.6 milestone, at least for now.