rook: ObjectBucketClaim: Subsequent credentials do not have access to bucket
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior:
Any ObjectBucketClaims created after the initial claim do not have access to the bucket.
Expected behavior:
All ObjectBucketClaims should have appropriate access.
How to reproduce it (minimal and precise):
Create two ObjectBucketClaims:
{
  "metadata": {
    "name": "first",
    "namespace": "test"
  },
  "apiVersion": "objectbucket.io/v1alpha1",
  "spec": {
    "bucketName": "thanos",
    "storageClassName": "rook-ceph-replica-retain-bucket"
  },
  "kind": "ObjectBucketClaim"
}
{
  "metadata": {
    "name": "second",
    "namespace": "test"
  },
  "apiVersion": "objectbucket.io/v1alpha1",
  "spec": {
    "bucketName": "thanos",
    "storageClassName": "rook-ceph-replica-retain-bucket"
  },
  "kind": "ObjectBucketClaim"
}
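For context on what is being tested below: each claim should produce a Secret and a ConfigMap with the claim's name, holding the generated credentials and endpoint. A rough sketch of pulling them out (the file names are hypothetical; AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and BUCKET_HOST are the standard OBC keys):
# Apply both claims, then read back the generated credentials.
kubectl apply -f first-obc.json -f second-obc.json
kubectl -n test get secret first -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d; echo
kubectl -n test get secret first -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d; echo
kubectl -n test get configmap first -o jsonpath='{.data.BUCKET_HOST}'; echo
# Same again for the "second" claim; those are the credentials that fail below.
kubectl -n test get secret second -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d; echo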
The first set of credentials works OK.
Test access with supplied credentials? [Y/n]
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
...
root@test:/# s3cmd ls
2021-06-02 14:21 s3://thanos
root@test:/# s3cmd ls s3://thanos
root@test:/#
The second set?
Test access with supplied credentials? [Y/n]
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
...
root@test:/# s3cmd ls
root@test:/# s3cmd ls s3://thanos
ERROR: Access to bucket 'thanos' was denied
ERROR: S3 error: 403 (AccessDenied)
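For what it's worth, one hedged way to look at this from the RGW side is to compare the bucket owner with the two generated users from the rook-ceph toolbox (the uid placeholder below is hypothetical):
# Inside the rook-ceph toolbox pod
radosgw-admin bucket stats --bucket=thanos        # the "owner" field shows which generated user owns the bucket
radosgw-admin metadata list user                  # list RGW users, including the OBC-generated ones
radosgw-admin user info --uid=<second-obc-user>   # inspect the second claim's user and its keys/caps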
File(s) to submit:
- Cluster CR (custom resource), typically called cluster.yaml, if necessary. See https://github.com/uhthomas/automata/tree/2fb5e27d750e4d91788aebde4e4d55ca90443f3f/k8s/pillowtalk/rook_ceph for the full rook_ceph config.
For the ObjectBucketClaims, see the manifests above.
- Operator’s logs, if necessary
debug 2021-06-02T15:04:03.545+0000 7f6f10cf6700 1 ====== starting new request req=0x7f6fb1e5f6b0 =====
debug 2021-06-02T15:04:03.545+0000 7f6f10cf6700 1 op->ERRORHANDLER: err_no=-13 new_err_no=-13
--
debug 2021-06-02T15:04:03.545+0000 7f6f10cf6700 1 ====== starting new request req=0x7f6fb1e5f6b0 =====
debug 2021-06-02T15:04:03.545+0000 7f6f10cf6700 1 op->ERRORHANDLER: err_no=-13 new_err_no=-13
debug 2021-06-02T15:04:03.545+0000 7f6ef84c5700 1 ====== req done req=0x7f6fb1e5f6b0 op status=0 http_status=403 latency=0s ======
debug 2021-06-02T15:04:03.545+0000 7f6ef84c5700 1 beast: 0x7f6fb1e5f6b0: 172.16.196.106 - - [2021-06-02T15:04:03.545931+0000] "GET /thanos/?delimiter=%2F&encoding-type=url&fetch-owner=true&list-type=2&prefix= HTTP/1.1" 403 228 - "MinIO (linux; amd64) minio-go/v7.0.2 thanos-compact/0.18.0 (go1.15.7)" -
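(The lines above are RGW request logs rather than operator logs; for anyone reproducing, a hedged way to pull them, assuming the default rook-ceph namespace and Rook's app=rook-ceph-rgw pod label:)
kubectl -n rook-ceph logs -l app=rook-ceph-rgw --tail=500 | grep -B1 -A2 'err_no=-13'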
- Crashing pod(s) logs, if necessary
Environment:
- OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
- Kernel (e.g. uname -a):
Linux 13a0a37008 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- Cloud provider or hardware configuration: 2xE5-2650 v2, 48GB
- Rook version (use rook version inside of a Rook Pod):
rook: v1.6.3
go: go1.16.3
- Storage backend version (e.g. for ceph do ceph -v):
ceph version 16.2.2 (e8f22dde28889481f4dda2beb8a07788204821d3) pacific (stable)
- Kubernetes version (use kubectl version): v1.21.1
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Custom (kubeadm)
- Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox): HEALTH_OK
Commits related to this issue
- pillowtalk/thanos: Attempt to fix bucket https://github.com/rook/rook/issues/8034#issuecomment-853975085 — committed to uhthomas/automata by uhthomas 3 years ago
@Ericra95: please don't confuse user caps with the s3 protocol. AFAIR these caps are used to send adminops requests to RGW, more like a REST interface for radosgw-admin commands. IMO the right way to provide access to a bucket in the s3 protocol would be via bucket policies.
So, I deleted all the OBCs and the StorageClass. I had to manually go into Ceph and delete any existing buckets also.
I reapplied the manifests and it seems okay now.
Thank you!
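For context on the bucket-policy suggestion above, a rough sketch of what that could look like, applied with the first (bucket-owning) claim's credentials; the principal uid is hypothetical and would be the RGW user generated for the second claim:
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/<second-obc-user>"]},
    "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
    "Resource": ["arn:aws:s3:::thanos", "arn:aws:s3:::thanos/*"]
  }]
}
EOF
s3cmd setpolicy policy.json s3://thanos   # run with the first claim's access/secret key configured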
Can we leave this ticket open for the aforementioned reasons?
Thank you, I’ll give it a try later.
It seems odd that I need two storage classes just for one bucket.
I guess there are two takeaways from this thread: