rook: Objectstore: s3cmd make bucket returns error InvalidLocationConstraint
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior: Following installation of the rook master branch with an object store, there is an error when creating a bucket:
s3cmd mb s3://bkt
ERROR: S3 error: 400 (InvalidLocationConstraint)
This is a regression from the release-0.9 branch, where the s3 make bucket command succeeds.
Expected behavior: s3cmd is able to create buckets
s3cmd mb s3://bkt
Bucket 's3://bkt/' created
How to reproduce it (minimal and precise):
minishift start --disk-size 40g --memory 8GB --cpus 8 -v99
minishift ssh "sudo mkdir /mnt/sda1/var/lib/rook;sudo ln -s /mnt/sda1/var/lib/rook /var/lib/rook"
oc login -u system:admin
oc create -f ./common.yaml
oc create -f ./operator-openshift.yaml
vim ./cluster.yaml
config:
  storeType: filestore
# Cluster level list of directories
directories:
- path: /var/lib/rook
:wq
oc create -f ./cluster.yaml
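Before creating the object store, it is worth confirming that the Ceph cluster pods (mon, mgr, osd) actually came up; a quick check, assuming the default rook-ceph namespace:
oc -n rook-ceph get pods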
vim ./object-openshift.yaml
spec:
  # The pool spec used to create the metadata pools. Must use replication.
  metadataPool:
    failureDomain: host
    replicated:
      size: 1
  # The pool spec used to create the data pool. Can use replication or erasure coding.
  dataPool:
    failureDomain: host
    replicated:
      size: 1
:wq
oc create -f ./object-openshift.yaml
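Likewise, a quick sanity check that the RGW pod and service for the object store were created; the label and service name below follow the usual Rook conventions for a store named my-store, so adjust them if the CR uses a different name:
oc -n rook-ceph get pods -l app=rook-ceph-rgw
oc -n rook-ceph get svc rook-ceph-rgw-my-store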
oc create -f ./object-user.yaml
oc create -f ./toolbox.yaml
oc -n rook-ceph exec -it $(oc -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
[root@minishift /]# yum install -y s3cmd
[root@minishift /]# radosgw-admin user create --display-name="COSBench_user" --uid=cosbench --access-key b2345678901234567890 --secret b234567890123456789012345678901234567890 | jq
[root@minishift /]# s3cmd --configure
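For reference, the parts of the resulting ~/.s3cfg that matter here look roughly like the excerpt below; the access/secret keys are the ones created above, while the host entries are an assumption based on the in-cluster RGW service for a store named my-store:
access_key = b2345678901234567890
secret_key = b234567890123456789012345678901234567890
host_base = rook-ceph-rgw-my-store.rook-ceph:80
host_bucket = rook-ceph-rgw-my-store.rook-ceph:80
use_https = False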
[root@minishift /]# s3cmd mb s3://bkt
ERROR: S3 error: 400 (InvalidLocationConstraint)
Environment:
- OS (e.g. from /etc/os-release): RHEL 7.6
- Kernel (e.g. uname -a): 3.10.0-957.10.1.el7.x86_64
- Cloud provider or hardware configuration: N/A
- Rook version (use rook version inside of a Rook Pod): rook: v0.9.0-362.g69936c1
  git branch -vv
  * master 69936c1 [origin/master] Merge pull request #2676 from colonwq/pr2660
- Kubernetes version (use kubectl version):
  oc version
  oc v3.11.0+0cbc58b
  kubernetes v1.11.0+d4cacc0
  features: Basic-Auth GSSAPI Kerberos SPNEGO
  Server https://192.168.42.14:8443
  kubernetes v1.11.0+d4cacc0
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): minishift-1.33.0-linux-amd64.tgz
- Storage backend status (e.g. for Ceph use ceph health in the [Rook Ceph toolbox](https://rook.io/docs/Rook/master/toolbox.html)):
[root@minishift /]# ceph -s
  cluster:
    id:     34a39b63-57b6-463a-9624-a0d1414fbba8
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum b,c,a
    mgr: a(active)
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active
  data:
    pools:   6 pools, 600 pgs
    objects: 205 objects, 4.2 KiB
    usage:   9.7 GiB used, 28 GiB / 38 GiB avail
    pgs:     600 active+clean
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 15 (8 by maintainers)
Yep, s3cmd works when specifying location like:
But it should work with the default placement as well; it is configured in the zonegroup/zone, so it should not be necessary to specify it explicitly.
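For illustration, specifying the location explicitly looks something like the command below; the :default-placement value follows the Ceph placement docs linked further down, and --region may be spelled --bucket-location on older s3cmd versions, so treat this as a sketch rather than the exact command used above:
s3cmd --region=":default-placement" mb s3://bkt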
I ran into this error while following the RadosGW tutorial. I made a PR #3179 to add the --region :default-placement option, which solved the problem of the bucket creation. I hope it can help others.
OK, I think I've figured it out. In my case my rgw was throwing:
Default placement can be chosen by appending :default-placement to the region name (per http://docs.ceph.com/docs/master/radosgw/placement/#s3-bucket-placement).
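To confirm what the zonegroup/zone actually advertise as the default placement (the configuration that should make the explicit region unnecessary), the placement targets can be inspected from the toolbox; a minimal sketch, assuming the default zonegroup/zone created by Rook:
radosgw-admin zonegroup get | jq '.default_placement, .placement_targets'
radosgw-admin zone get | jq '.placement_pools'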