noobaa-core: Warp IO workload errors out with "The specified key does not exist" on Bare Metal
Environment info
- NooBaa Version:
[root@hpo-app11 ~]# noobaa version
INFO[0000] CLI version: 5.10.1
INFO[0000] noobaa-image: noobaa/noobaa-core:5.10.1
INFO[0000] operator-image: noobaa/noobaa-operator:5.10.1
[root@hpo-app11 ~]#
- Platform:
[root@hpo-app11 ~]# oc version
Client Version: 4.11.0
Kustomize Version: v4.5.4
Server Version: 4.11.0
Kubernetes Version: v1.24.0+9546431
[root@hpo-app11 ~]#
ODF Version
[root@hpo-app11 ~]# oc get csv
NAME                                   DISPLAY                       VERSION               REPLACES                          PHASE
mcg-operator.v4.11.1                   NooBaa Operator               4.11.1                mcg-operator.v4.11.0              Succeeded
metallb-operator.4.11.0-202208300306   MetalLB Operator              4.11.0-202208300306                                     Succeeded
ocs-operator.v4.11.1                   OpenShift Container Storage   4.11.1                ocs-operator.v4.11.0              Succeeded
odf-csi-addons-operator.v4.11.1        CSI Addons                    4.11.1                odf-csi-addons-operator.v4.11.0   Succeeded
odf-operator.v4.11.1                   OpenShift Data Foundation     4.11.1                odf-operator.v4.11.0              Succeeded
[root@hpo-app11 ~]#
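For context, the three MetalLB-exposed S3 endpoint IPs used in the reproduction steps below can be confirmed from the LoadBalancer services (a sketch; the openshift-storage namespace is an assumption for this deployment):
# List LoadBalancer services and their MetalLB-assigned external IPs
oc -n openshift-storage get svc | grep LoadBalancer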
Warp version:
[root@hpo-app11 ~]# warp --version
warp version 0.5.5 - 1baadbc
[root@hpo-app11 ~]#
Actual behavior
The warp mixed run on Bare Metal fails on all three MetalLB endpoints with S3 NoSuchKey errors ("The specified key does not exist").
Expected behavior
warp should complete successfully on all three endpoints.
Steps to reproduce
- Set up BM with CNSA/CSI RC4, ODF 4.11.1, and the latest DAS operator.
- Run warp on all three MetalLB IPs with 3 different users (a pre-run sanity check is sketched after these commands):
warp mixed --host=10.49.0.109 --access-key=uIBpl5fBWanFrXxLY6JW --secret-key=Z658YPATWG491XEbnn9ZwCPUGUS8KS36m91zWhUO --obj.size=1K --duration=120m --bucket=bucket11 --debug --insecure --tls 1> capture1.txt 2> error1.txt
warp mixed --host=10.49.0.110 --access-key=lDW9JSyitHuYa0EGx6vM --secret-key=BAu2KPpbAct28rvYwCdjILkO1vNWphsO57H5htlD --obj.size=1K --duration=120m --bucket=bucket22 --debug --insecure --tls 1> capture2.txt 2> error2.txt
warp mixed --host=10.49.0.111 --access-key=8FnXjkMLesfOOOr4TKw6 --secret-key=OjcPkdOyTE6FDujcmBj3JeUVl11TpZznyj4GyVhe --obj.size=1M --duration=120m --bucket=bucket33 --debug --insecure --tls 1> capture3.txt 2> error3.txt
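Before kicking off the long runs, a quick credential/bucket sanity check along these lines can rule out setup issues (a sketch, assuming the aws CLI is available; shown for the first endpoint/user, repeat for the others):
# Verify the user can reach its bucket on its MetalLB endpoint (self-signed TLS, so skip verification)
AWS_ACCESS_KEY_ID=uIBpl5fBWanFrXxLY6JW \
AWS_SECRET_ACCESS_KEY=Z658YPATWG491XEbnn9ZwCPUGUS8KS36m91zWhUO \
aws s3 ls s3://bucket11 --endpoint-url https://10.49.0.109 --no-verify-ssl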
Attachments: capture1.txt capture2.txt capture3.txt error1.txt error2.txt error3.txt
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 32 (10 by maintainers)
We ran warp with different object sizes consecutively on the 3 IPs for different durations on ODF 4.11.4 and did not observe this issue. The runs were executed for 2 hrs, 4 hrs, 6 hrs, and 8 hrs and completed without any issues. Below is the ODF version we had installed on BM.
ODF version
So, we can close this issue.
@romayalon it is just an update on what was tried yesterday, as per the discussion in the call. We will run with other object sizes as well to make sure that the bug is fixed. I will definitely share the logs in the final update.
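For reference, the consecutive verification runs described above can be scripted roughly like this (illustrative only; endpoint, keys, and bucket taken from the first reproduction command, durations from the retest):
# Loop over the durations used in the ODF 4.11.4 verification (object size was varied across runs as well)
for d in 2h 4h 6h 8h; do
  warp mixed --host=10.49.0.109 --access-key=uIBpl5fBWanFrXxLY6JW --secret-key=Z658YPATWG491XEbnn9ZwCPUGUS8KS36m91zWhUO \
    --obj.size=1K --duration=$d --bucket=bucket11 --insecure --tls
done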