rclone: regression between v1.52.3 and v1.53.1: rclone tries to create an existing S3 bucket
What is the problem you are having with rclone?
Copying a single file into an existing, empty S3 bucket fails.
Looking at the debug logs, it seems that v1.53.1 tries to create an already-existing S3 bucket. That operation fails with AccessDenied because our IAM policies don't allow bucket creation.
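For context, the IAM policy attached to this user is scoped roughly like the following sketch (a hypothetical reconstruction, not the actual policy: object-level read/write on the bucket only, with no s3:CreateBucket and no s3:ListBucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ObjectAccessOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```

With a policy shaped like this, both a HEAD-bucket check and the fallback PUT-bucket call would be denied, which matches the AccessDenied seen in the logs below.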
What is your rclone version (output from rclone version)
# ~/rclone/rclone-v1.53.1-linux-amd64/rclone version
rclone v1.53.1
- os/arch: linux/amd64
- go version: go1.15
Which OS you are using and how many bits (eg Windows 7, 64 bit)
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.7 LTS
Release: 16.04
Codename: xenial
Which cloud storage system are you using? (eg Google Drive)
AWS S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)
# rclone copy /tmp/toto backup-plaintext:my-bucket
A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
Operation is failing with v1.53.1:
# ~/rclone/rclone-v1.53.1-linux-amd64/rclone --log-level DEBUG --dump bodies --retries 1 copy /tmp/toto backup-plaintext:my-bucket
2020/10/23 10:39:14 DEBUG : rclone: Version "v1.53.1" starting with parameters ["/root/rclone/rclone-v1.53.1-linux-amd64/rclone" "--log-level" "DEBUG" "--dump" "bodies" "--retries" "1" "copy" "/tmp/toto" "backup-plaintext:my-bucket"]
2020/10/23 10:39:14 DEBUG : Creating backend with remote "/tmp/toto"
2020/10/23 10:39:14 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020/10/23 10:39:14 DEBUG : fs cache: adding new entry for parent of "/tmp/toto", "/tmp"
2020/10/23 10:39:14 DEBUG : Creating backend with remote "backup-plaintext:my-bucket"
2020/10/23 10:39:14 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2020/10/23 10:39:14 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:14 DEBUG : HTTP REQUEST (req 0xc00002dd00)
2020/10/23 10:39:14 DEBUG : HEAD /my-bucket/toto HTTP/1.1
Host: s3.eu-west-1.amazonaws.com
User-Agent: rclone/v1.53.1
Authorization: XXXX
X-Amz-Content-Sha256: XXXXXX
X-Amz-Date: XXXXX
2020/10/23 10:39:14 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:14 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:14 DEBUG : HTTP RESPONSE (req 0xc00002dd00)
2020/10/23 10:39:14 DEBUG : HTTP/1.1 404 Not Found
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Fri, 23 Oct 2020 08:39:14 GMT
Server: AmazonS3
X-Amz-Id-2: XXXXXX
X-Amz-Request-Id: XXXXXX
2020/10/23 10:39:14 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:14 DEBUG : toto: Need to transfer - File not found at Destination
2020/10/23 10:39:14 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:14 DEBUG : HTTP REQUEST (req 0xc0000f7700)
2020/10/23 10:39:14 DEBUG : PUT /my-bucket HTTP/1.1
Host: s3.eu-west-1.amazonaws.com
User-Agent: rclone/v1.53.1
Content-Length: 153
Authorization: XXXX
X-Amz-Acl: private
X-Amz-Content-Sha256: XXXXX
X-Amz-Date: XXXXXX
Accept-Encoding: gzip
<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>eu-west-1</LocationConstraint></CreateBucketConfiguration>
2020/10/23 10:39:14 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:14 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:14 DEBUG : HTTP RESPONSE (req 0xc0000f7700)
2020/10/23 10:39:14 DEBUG : HTTP/1.1 403 Forbidden
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Fri, 23 Oct 2020 08:39:14 GMT
Server: AmazonS3
X-Amz-Id-2: XXXXXX
X-Amz-Request-Id: XXXXXX
f3
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>XXXXXX</RequestId><HostId>XXXXXX</HostId></Error>
0
2020/10/23 10:39:14 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:14 ERROR : toto: Failed to copy: AccessDenied: Access Denied
status code: 403, request id: XXXXXX, host id: XXXXXX
2020/10/23 10:39:14 ERROR : Attempt 1/1 failed with 1 errors and: AccessDenied: Access Denied
status code: 403, request id: XXXXXX, host id: XXXXXX
2020/10/23 10:39:14 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 1 (retrying may help)
Elapsed time: 0.4s
2020/10/23 10:39:14 DEBUG : 4 go routines active
2020/10/23 10:39:14 Failed to copy: AccessDenied: Access Denied
status code: 403, request id: XXXXXX, host id: XXXXXX
The exact same operation worked fine with v1.52.3.
# ~/rclone/rclone-v1.52.3-linux-amd64/rclone --log-level DEBUG --dump bodies --retries 1 copy /tmp/toto backup-plaintext:my-bucket
2020/10/23 10:39:34 DEBUG : rclone: Version "v1.52.3" starting with parameters ["/root/rclone/rclone-v1.52.3-linux-amd64/rclone" "--log-level" "DEBUG" "--dump" "bodies" "--retries" "1" "copy" "/tmp/toto" "backup-plaintext:my-bucket"]
2020/10/23 10:39:34 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020/10/23 10:39:34 DEBUG : fs cache: adding new entry for parent of "/tmp/toto", "/tmp"
2020/10/23 10:39:34 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2020/10/23 10:39:34 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:34 DEBUG : HTTP REQUEST (req 0xc000291400)
2020/10/23 10:39:34 DEBUG : HEAD /my-bucket/toto HTTP/1.1
Host: s3.eu-west-1.amazonaws.com
User-Agent: rclone/v1.52.3
Authorization: XXXX
X-Amz-Content-Sha256: XXXXX
X-Amz-Date: XXXXX
2020/10/23 10:39:34 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:34 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:34 DEBUG : HTTP RESPONSE (req 0xc000291400)
2020/10/23 10:39:34 DEBUG : HTTP/1.1 404 Not Found
Connection: close
Content-Type: application/xml
Date: Fri, 23 Oct 2020 08:39:34 GMT
Server: AmazonS3
X-Amz-Id-2: XXXXX
X-Amz-Request-Id: XXXXX
2020/10/23 10:39:34 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:34 DEBUG : toto: Need to transfer - File not found at Destination
2020/10/23 10:39:34 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:34 DEBUG : HTTP REQUEST (req 0xc000185900)
2020/10/23 10:39:34 DEBUG : PUT /my-bucket/toto?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=XXXXX%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=XXXXX&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=XXXXX HTTP/1.1
Host: s3.eu-west-1.amazonaws.com
User-Agent: rclone/v1.52.3
Content-Length: 6
content-md5: XXXXX
content-type: application/octet-stream
x-amz-acl: private
x-amz-meta-mtime: XXXXXX
Accept-Encoding: gzip
salut
2020/10/23 10:39:34 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:34 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:34 DEBUG : HTTP RESPONSE (req 0xc000185900)
2020/10/23 10:39:34 DEBUG : HTTP/1.1 200 OK
Content-Length: 0
Date: Fri, 23 Oct 2020 08:39:35 GMT
Etag: "XXXXX"
Server: AmazonS3
X-Amz-Id-2: XXXXX
X-Amz-Request-Id: XXXXX
2020/10/23 10:39:34 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:34 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:34 DEBUG : HTTP REQUEST (req 0xc00071f100)
2020/10/23 10:39:34 DEBUG : HEAD /my-bucket/toto HTTP/1.1
Host: s3.eu-west-1.amazonaws.com
User-Agent: rclone/v1.52.3
Authorization: XXXX
X-Amz-Content-Sha256: XXXXX
X-Amz-Date: XXXXX
2020/10/23 10:39:34 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/10/23 10:39:34 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:34 DEBUG : HTTP RESPONSE (req 0xc00071f100)
2020/10/23 10:39:34 DEBUG : HTTP/1.1 200 OK
Content-Length: 6
Accept-Ranges: bytes
Content-Type: application/octet-stream
Date: Fri, 23 Oct 2020 08:39:35 GMT
Etag: "XXXXX"
Last-Modified: Fri, 23 Oct 2020 08:39:35 GMT
Server: AmazonS3
X-Amz-Id-2: XXXXX
X-Amz-Meta-Mtime: XXXXX
X-Amz-Request-Id: XXXXX
2020/10/23 10:39:34 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/10/23 10:39:34 DEBUG : toto: MD5 = 6aba532a54c9eb9aa30496fa7f22734d OK
2020/10/23 10:39:34 INFO : toto: Copied (new)
2020/10/23 10:39:34 INFO :
Transferred: 6 / 6 Bytes, 100%, 24 Bytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 0.2s
2020/10/23 10:39:34 DEBUG : 4 go routines active
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 15 (5 by maintainers)
I don’t understand why rclone is trying to create the bucket when it could simply check whether the bucket exists, and why the error returned is a permissions error rather than a “bucket already exists” error. Maybe the “bucket already exists” exception is handled but the permissions one is not.
This seems like a bug to me and I would also like to see it reopened and fixed.
It would have been nice to leave this open, because it breaks the likely more common use case of writing to an existing bucket in favour of convenience in less secure “sandbox” setups.
That HEAD request is not checking whether the bucket exists, but whether the destination FILE does. Rclone is not checking for the bucket at all; it is simply assuming it is not there. @ncw
Rclone does try to see if the bucket exists: it uses HEAD bucket. However, that requires a separate permission, and if it is blocked we fall through to rclone trying to create the bucket.
So if you don’t want to use the no_check_bucket feature, then allow rclone to HEAD buckets.
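For reference, the no_check_bucket option mentioned above can be set in the remote's config section (a sketch assuming a standard rclone.conf layout, with the remote name taken from the logs in this issue):

```ini
# rclone.conf -- skip the HEAD-bucket / create-bucket step entirely
[backup-plaintext]
type = s3
provider = AWS
region = eu-west-1
no_check_bucket = true
```

It should also be settable per invocation via the usual backend-flag convention, i.e. `rclone copy --s3-no-check-bucket /tmp/toto backup-plaintext:my-bucket`.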
The other alternative would be to list the bucket to see if it exists. However this is a more expensive API operation and is also likely to be blocked for a restricted user.
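If you prefer to keep the existence check, the HEAD bucket API call requires the s3:ListBucket permission on the bucket itself (per the AWS S3 HeadBucket documentation), so adding a statement like this hypothetical one to the restricted user's policy should let the check succeed:

```json
{
  "Sid": "AllowBucketHead",
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::my-bucket"
}
```

Note the Resource here is the bucket ARN (no `/*` suffix), unlike object-level actions which apply to `arn:aws:s3:::my-bucket/*`.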
I would guess AWS checks what the user is permitted to do before attempting the operation, so if your user can’t create buckets you get “permission denied” rather than “bucket already exists”.
The permission denied error can also be caused by you not owning the bucket, so when we get it we need to surface it to the user.
It is complicated, alas.