che: Support for CustomResource `openshift` & `kubernetes` devfile components is not working
Describe the bug
I am attempting to create a devfile which will deploy a Kafka cluster and Kafka topics in the workspace along with the other workspace components.
Following the documentation at https://devfile.io/docs/2.2.0/adding-a-kubernetes-or-openshift-component does not produce the expected result.
This feature appears to have been enabled by: https://github.com/devfile/devworkspace-operator/pull/961
However, variations on a devfile to implement it have failed.
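For reference, the documented pattern boils down to an openshift (or kubernetes) component carrying the manifest, plus an apply command in the deploy group. The sketch below is a condensed, hypothetical illustration of that pattern (all names are placeholders), not one of the devfiles from this report:
schemaVersion: 2.2.0
metadata:
  name: sketch-workspace              # placeholder name
components:
  - name: my-resource                 # placeholder component name
    openshift:
      deployByDefault: false          # false: applied by a command; true: deployed at workspace startup
      inlined: |
        apiVersion: v1
        kind: ConfigMap               # any manifest can go here, including a custom resource such as a Strimzi Kafka
        metadata:
          name: my-config             # placeholder
commands:
  - id: deploy-my-resource            # placeholder command id
    apply:
      component: my-resource
      group:
        kind: deploy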
Che version
7.63@latest
Steps to reproduce
Example Devfile #1:
With this devfile, the workspace silently excludes all of the included components; no errors are surfaced.
schemaVersion: 2.2.0
attributes:
  controller.devfile.io/storage-type: per-workspace
metadata:
  name: che-test-workspace
components:
  - name: dev-tools
    container:
      image: image-registry.openshift-image-registry.svc:5000/eclipse-che-images/quarkus:latest
      memoryRequest: 1Gi
      memoryLimit: 6Gi
      cpuRequest: 500m
      cpuLimit: 2000m
      mountSources: true
      sourceMapping: /projects
      args:
        - '-f'
        - /dev/null
      command:
        - tail
      env:
        - name: SHELL
          value: "/bin/zsh"
      volumeMounts:
        - name: m2
          path: /home/user/.m2
  - name: ubi
    container:
      args:
        - '-f'
        - /dev/null
      command:
        - tail
      image: registry.access.redhat.com/ubi9/ubi-minimal
      memoryLimit: 64M
      mountSources: true
      sourceMapping: /projects
  - volume:
      size: 4Gi
    name: projects
  - volume:
      size: 2Gi
    name: m2
  - name: kafka-cluster
    openshift:
      deployByDefault: true
      inlined: |
        apiVersion: kafka.strimzi.io/v1beta2
        kind: Kafka
        metadata:
          name: che-demo
          labels:
            app: che-demo
        spec:
          kafka:
            config:
              offsets.topic.replication.factor: 1
              transaction.state.log.replication.factor: 1
              transaction.state.log.min.isr: 1
              inter.broker.protocol.version: '3.4'
            version: 3.4.0
            storage:
              size: 1Gi
              deleteClaim: true
              type: persistent-claim
            replicas: 1
            listeners:
              - name: plain
                port: 9092
                type: internal
                tls: false
              - name: tls
                port: 9093
                type: internal
                tls: true
          entityOperator:
            topicOperator: {}
            userOperator: {}
          zookeeper:
            storage:
              deleteClaim: true
              size: 1Gi
              type: persistent-claim
            replicas: 1
  - name: kafka-topic
    openshift:
      deployByDefault: true
      inlined: |
        apiVersion: kafka.strimzi.io/v1beta2
        kind: KafkaTopic
        metadata:
          name: che-demo
          labels:
            strimzi.io/cluster: che-demo
        spec:
          config:
            retention.ms: 604800000
            segment.bytes: 1073741824
          partitions: 10
          replicas: 1
          topicName: che-demo
commands:
  - exec:
      commandLine: "cp /home/user/.kube/config /projects/config"
      component: dev-tools
      group:
        kind: run
      label: Copy Kubeconfig
      workingDir: '/'
    id: copy-kubeconfig
Example Devfile #2:
With this devfile, the workspace deploys with the correct container components, but there is no obvious way to run the apply commands. Furthermore, the apply commands cannot be created with the deploy group, as that group does not appear to be implemented. Note: the group entries have to be removed from the apply commands for this example not to throw an error.
schemaVersion: 2.2.0
attributes:
  controller.devfile.io/storage-type: per-workspace
metadata:
  name: che-test-workspace
components:
  - name: dev-tools
    container:
      image: image-registry.openshift-image-registry.svc:5000/eclipse-che-images/quarkus:latest
      memoryRequest: 1Gi
      memoryLimit: 6Gi
      cpuRequest: 500m
      cpuLimit: 2000m
      mountSources: true
      sourceMapping: /projects
      args:
        - '-f'
        - /dev/null
      command:
        - tail
      env:
        - name: SHELL
          value: "/bin/zsh"
      volumeMounts:
        - name: m2
          path: /home/user/.m2
  - name: ubi
    container:
      args:
        - '-f'
        - /dev/null
      command:
        - tail
      image: registry.access.redhat.com/ubi9/ubi-minimal
      memoryLimit: 64M
      mountSources: true
      sourceMapping: /projects
  - volume:
      size: 4Gi
    name: projects
  - volume:
      size: 2Gi
    name: m2
  - name: kafka-cluster
    openshift:
      inlined: |
        apiVersion: kafka.strimzi.io/v1beta2
        kind: Kafka
        metadata:
          name: che-demo
          labels:
            app: che-demo
        spec:
          kafka:
            config:
              offsets.topic.replication.factor: 1
              transaction.state.log.replication.factor: 1
              transaction.state.log.min.isr: 1
              inter.broker.protocol.version: '3.4'
            version: 3.4.0
            storage:
              size: 1Gi
              deleteClaim: true
              type: persistent-claim
            replicas: 1
            listeners:
              - name: plain
                port: 9092
                type: internal
                tls: false
              - name: tls
                port: 9093
                type: internal
                tls: true
          entityOperator:
            topicOperator: {}
            userOperator: {}
          zookeeper:
            storage:
              deleteClaim: true
              size: 1Gi
              type: persistent-claim
            replicas: 1
  - name: kafka-topic
    openshift:
      inlined: |
        apiVersion: kafka.strimzi.io/v1beta2
        kind: KafkaTopic
        metadata:
          name: che-demo
          labels:
            strimzi.io/cluster: che-demo
        spec:
          config:
            retention.ms: 604800000
            segment.bytes: 1073741824
          partitions: 10
          replicas: 1
          topicName: che-demo
commands:
  - exec:
      commandLine: "cp /home/user/.kube/config /projects/config"
      component: dev-tools
      group:
        kind: run
      label: Copy Kubeconfig
      workingDir: '/'
    id: copy-kubeconfig
  - apply:
      component: kafka-cluster
      group:
        kind: deploy
      label: deploy-kafka-cluster
    id: kafka-cluster
  - apply:
      component: kafka-topic
      group:
        kind: deploy
      label: kafka-topic
    id: kafka-topic
Expected behavior
Workspace deployed with Kafka cluster and Topic
Runtime
OpenShift
Screenshots
No response
Installation method
OperatorHub
Environment
macOS
Eclipse Che Logs
No response
Additional context
The Strimzi Operator is installed with cluster scope.
About this issue
- State: closed
- Created a year ago
- Comments: 17 (10 by maintainers)
I’ve updated the title to more precisely define the issue (custom resources are not supported in devfile components). Currently, the problem is that within the controller, we require the golang specs for custom resource objects in order to apply them and cache them within the reconcile loop.
However, standard Kubernetes objects should be supported. @cgruver let me know if this is accurate.
Yeah, the v1beta1 issue was a red herring; the real problem is that DWO doesn’t know how to transmit Kafka CRs to the API server. We might need additional handling for custom resources, as this is an issue that will impact any CR on the cluster, not just Kafka. I think our hands may be tied within the operator here, at least for the time being. I’ll try to look into it more when I have some time.
The second flow (devfile no. 2) is still something that should be supported via the editor, though.
Note: I’ve updated the clusterrole/clusterrolebinding in the comment above – I had the incorrect API group for the clusterrole.
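For context, the RBAC involved is along these lines. The role/binding names and the subject below are placeholders rather than the actual resources from the elided comment; the relevant point of the correction is that the apiGroups entry has to be kafka.strimzi.io:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cr-access              # placeholder name
rules:
  - apiGroups:
      - kafka.strimzi.io               # the Strimzi API group (the apiGroups value is what needed correcting)
    resources:
      - kafkas
      - kafkatopics
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-cr-access              # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: strimzi-cr-access
subjects:
  - kind: ServiceAccount
    name: workspace-sa                 # placeholder; substitute the workspace's actual service account
    namespace: user-che-namespace      # placeholder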
Tested on OpenShift with a cluster-admin user and a regular user:
As cluster-admin, workspace creation succeeds. However, workspace start ultimately fails with an error suggesting that DWO cannot manage CRs no matter what we do at the moment (it doesn’t know how to serialize/deserialize them, which makes sense).
As a regular, non-cluster-admin user, I continue to hit the original issue, except that the 403 Forbidden is now due to user permissions.