rook: Dashboard - cannot create bucket - InvalidLocationConstraint

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior: When creating a bucket with the s3cmd or mc client, the bucket is created. But when creating a bucket in the dashboard, it fails with status code 400 and the message "InvalidLocationConstraint":

{"detail": "RGW REST API failed request with status code 400\n(b'{\"Code\":\"InvalidLocationConstraint\",\"Message\":\"The specified location-constr'\n b'aint is not valid\",\"BucketName\":\"tst01\",\"RequestId\":\"tx00000000000000010cda3'\n b'-005f4fa675-30219-s3\",\"HostId\":\"30219-s3-s3\"}')", "component": "rgw"}


Expected behavior: The bucket is created via the dashboard.

How to reproduce it (minimal and precise): Standard deployment of the cluster and object storage.

File(s) to submit: RGW logs:

```
debug 2020-09-02T14:17:18.096+0000 7f5bac0a0700 1 ====== starting new request req=0x7f5b0013c8a0 =====
debug 2020-09-02T14:17:18.097+0000 7f5bac0a0700 1 ====== req done req=0x7f5b0013c8a0 op status=0 http_status=200 latency=0.001000111s ======
debug 2020-09-02T14:17:18.360+0000 7f5bb20ac700 1 ====== starting new request req=0x7f5afffb98a0 =====
debug 2020-09-02T14:17:18.388+0000 7f5bb20ac700 1 ====== req done req=0x7f5afffb98a0 op status=0 http_status=200 latency=0.028003110s ======
debug 2020-09-02T14:17:20.293+0000 7f5b3f7c7700 1 ====== starting new request req=0x7f5affcb38a0 =====
debug 2020-09-02T14:17:20.295+0000 7f5b3f7c7700 0 req 1102320 0.002000222s s3:create_bucket location constraint (default) can't be found.
debug 2020-09-02T14:17:20.295+0000 7f5b3f7c7700 1 ====== req done req=0x7f5affcb38a0 op status=-2208 http_status=400 latency=0.002000222s ======
debug 2020-09-02T14:17:18.893+0000 7f7baa5e8700 1 ====== starting new request req=0x7f7ab6df58a0 =====
debug 2020-09-02T14:17:18.893+0000 7f7baa5e8700 1 ====== req done req=0x7f7ab6df58a0 op status=0 http_status=200 latency=0s ======
debug 2020-09-02T14:17:22.870+0000 7f5c311aa700 0 WARNING: RGWRados::log_usage(): user name empty (bucket=), skipping
debug 2020-09-02T14:17:22.870+0000 7f5c311aa700 0 WARNING: RGWRados::log_usage(): user name empty (bucket=tst01), skipping
debug 2020-09-02T14:17:23.563+0000 7f5b60809700 1 ====== starting new request req=0x7f5b0003a8a0 =====
debug 2020-09-02T14:17:23.593+0000 7f5b60809700 1 ====== req done req=0x7f5b0003a8a0 op status=0 http_status=200 latency=0.029003221s ======
debug 2020-09-02T14:17:20.552+0000 7f7b38d05700 1 ====== starting new request req=0x7f7ab65e58a0 =====
debug 2020-09-02T14:17:20.567+0000 7f7b38d05700 1 ====== req done req=0x7f7ab65e58a0 op status=0 http_status=200 latency=0.015001249s ======
debug 2020-09-02T14:17:20.648+0000 7f7be7e63700 0 WARNING: RGWRados::log_usage(): user name empty (bucket=), skipping
debug 2020-09-02T14:17:28.096+0000 7f5b67817700 1 ====== starting new request req=0x7f5b0013c8a0 =====
debug 2020-09-02T14:17:28.097+0000 7f5b67817700 1 ====== req done req=0x7f5b0013c8a0 op status=0 http_status=200 latency=0.001000111s ======
debug 2020-09-02T14:17:28.360+0000 7f5b7b03e700 1 ====== starting new request req=0x7f5afffb98a0 =====
debug 2020-09-02T14:17:28.394+0000 7f5b7b03e700 1 ====== req done req=0x7f5afffb98a0 op status=0 http_status=200 latency=0.034003776s ======
debug 2020-09-02T14:17:28.892+0000 7f7ba35da700 1 ====== starting new request req=0x7f7ab6df58a0 =====
debug 2020-09-02T14:17:28.892+0000 7f7ba35da700 1 ====== req done req=0x7f7ab6df58a0 op status=0 http_status=200 latency=0s ======
debug 2020-09-02T14:17:33.565+0000 7f5bdc100700 1 ====== starting new request req=0x7f5b0003a8a0 =====
debug 2020-09-02T14:17:33.600+0000 7f5bdc100700 1 ====== req done req=0x7f5b0003a8a0 op status=0 http_status=200 latency=0.035003887s ======
debug 2020-09-02T14:17:30.551+0000 7f7b7f592700 1 ====== starting new request req=0x7f7ab65e58a0 =====
debug 2020-09-02T14:17:30.570+0000 7f7b9cdcd700 1 ====== req done req=0x7f7ab65e58a0 op status=0 http_status=200 latency=0.019001582s ======
```

Environment:

  • OS (e.g. from /etc/os-release): centos7
  • Kernel (e.g. uname -a): Linux k8s-ra01v 4.18.0-193.6.3.el8_2.x86_64
  • Cloud provider or hardware configuration: local vSphere Cluster created with Rancher
  • Rook version (use rook version inside of a Rook Pod): v1.4.1
  • Storage backend version (e.g. for ceph do ceph -v): 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
  • Kubernetes version (use kubectl version): 1.18.6
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Rancher
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox): HEALTH_OK

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 16 (9 by maintainers)

Most upvoted comments

I have had the same issue in version 15.2.10. The problem is that when you follow the Ceph manual for creating a multizone setup, there is a step to rename the zonegroup named "default" to something else, e.g. us-east. Doing so renames the zonegroup's name but not its api_name. The solution is the following:

  1. Export the zonegroup: radosgw-admin zonegroup get --rgw-zonegroup=us
  2. Check in the JSON output whether "name" and "api_name" match.
  3. If not, copy the JSON to a file, e.g. zonegroup_us.json, and edit the "api_name" field.
  4. Import the JSON: radosgw-admin zonegroup set --infile ./zonegroup_us.json
  5. Check it again: radosgw-admin zonegroup get --rgw-zonegroup=us
  6. Update the period: radosgw-admin period update --commit (you can pass --rgw-realm=<your_realm> if you have more than one realm)
  7. Restart the radosgw containers: ceph orch restart rgw.<realm>.<zone>

After that it works.
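The JSON edit in step 3 can be scripted instead of done by hand. A minimal sketch in Python; the file name and the assumption that api_name should simply mirror name come from the steps above, not from any official tooling:

```python
import json

def fix_api_name(path: str = "zonegroup_us.json") -> dict:
    """Make api_name match the (renamed) zonegroup name in an exported JSON.

    Assumes the file was produced by:
        radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup_us.json
    """
    with open(path) as f:
        zonegroup = json.load(f)
    if zonegroup.get("api_name") != zonegroup["name"]:
        # e.g. name="us" but api_name="default" after the rename
        zonegroup["api_name"] = zonegroup["name"]
        with open(path, "w") as f:
            json.dump(zonegroup, f, indent=2)
    return zonegroup
```

Afterwards re-import the file and commit the period exactly as in steps 4-6.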

@thotz @peter-poki This fix is to be backported to the Octopus release: https://github.com/ceph/ceph/pull/37449

@alimaredia Do you have any insight on why the issue described below is happening? https://tracker.ceph.com/issues/47676

@alfonsomthd Any idea why the period update is not working in this scenario?

I’ll try to reproduce the issue ASAP and let you know.

By the way, this command works for me: `s3cmd mb --no-ssl --host=${AWS_HOST} --region=":default-placement" --host-bucket= s3://rookbucket`
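For context on why the `--region` flag matters: the region value is sent as the `LocationConstraint` element in the S3 CreateBucket request body, which is what RGW validates against the zonegroup's api_name (the `:default-placement` form additionally selects a placement target). A minimal sketch of such a payload, per the S3 API shape; this is illustrative, not what the dashboard actually sends:

```python
def create_bucket_body(location: str) -> str:
    """Build the XML body of an S3 CreateBucket request with a location constraint."""
    return (
        '<CreateBucketConfiguration '
        'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
        f'<LocationConstraint>{location}</LocationConstraint>'
        '</CreateBucketConfiguration>'
    )

# The s3cmd workaround above effectively sends:
print(create_bucket_body(":default-placement"))
```

If the value here does not match what RGW knows (api_name or a valid placement), the request fails with InvalidLocationConstraint, which is the failure seen in the dashboard.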

@travisn @alfonsomthd If I understand correctly, in the current workflow the zone and zonegroups are created before the dashboard options for RGW are set, and the period is not updated after setting those options. AFAIK the RGW daemon is also not restarted after the period update.