aws-sdk-ruby: S3 presigned urls do not sign headers x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption

There’s a regression in presigned URLs that use KMS encryption. I’ve upgraded from 2.0.39 to 2.1.7, and 2.1.7 is broken.

In 2.0.39, when I call

Aws::S3::Object#presigned_url(:put, { server_side_encryption: 'aws:kms', ssekms_key_id: MY_KEY_ID })

I get a url with ...&X-Amz-SignedHeaders=host%3Bx-amz-server-side-encryption%3Bx-amz-server-side-encryption-aws-kms-key-id&...

But when I use 2.1.7, those headers are missing: ...&X-Amz-SignedHeaders=host&...

and when I apply the headers to my PUT request on that url (as required: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html#put-object-sse-specific-request-headers), I get:

<Error>
  <Code>AccessDenied</Code>
  <Message>There were headers present in the request which were not signed</Message>
  <HeadersNotSigned>x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption</HeadersNotSigned>
  ...
</Error>

Rolling back to 2.0.39 fixes my problem.
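
To make the difference concrete, one can decode the X-Amz-SignedHeaders parameter from each URL (the bucket, key, and query values below are made up for illustration):

```ruby
require 'uri'
require 'cgi'

# Hypothetical presigned URLs showing the two behaviors described above.
url_2_0_39 = 'https://bucket.s3.amazonaws.com/key?X-Amz-SignedHeaders=' \
             'host%3Bx-amz-server-side-encryption%3Bx-amz-server-side-encryption-aws-kms-key-id'
url_2_1_7  = 'https://bucket.s3.amazonaws.com/key?X-Amz-SignedHeaders=host'

# Extract the ';'-separated list of signed header names from a presigned URL.
def signed_headers(url)
  CGI.parse(URI(url).query)['X-Amz-SignedHeaders'].first.split(';')
end

signed_headers(url_2_0_39)
# => ["host", "x-amz-server-side-encryption", "x-amz-server-side-encryption-aws-kms-key-id"]
signed_headers(url_2_1_7)
# => ["host"]
```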

About this issue

  • State: closed
  • Created 9 years ago
  • Comments: 26 (12 by maintainers)

Most upvoted comments

@mcfiredrill Sorry to hear that you have to work around the problem by downgrading. This is a known limitation on the S3 side; we are actively working on a feature that allows flexible, customizable presigning behavior in the long run. Meanwhile, the aws-sigv4 gem is available for customized signing and presigned requests.

So here is a workaround with aws-sigv4:

require 'aws-sigv4'

# A signer matching an S3 client at region 'us-west-2'
signer = Aws::Sigv4::Signer.new(
  service: 's3',
  region: 'us-west-2',
  credentials_provider: client.config.credentials,
  uri_escape_path: false
)

# Create a presigned URL for an object with bucket 'a-fancy-bucket' and key 'hello_world'
url = signer.presign_url(
  http_method: 'PUT',
  url: 'https://a-fancy-bucket.s3-us-west-2.amazonaws.com/hello_world',
  headers: {
    'Content-Type' => 'audio/mpeg',
    'x-amz-acl' => 'public-read'
  },
  body_digest: 'UNSIGNED-PAYLOAD'
)

# Making the request (the presigned URL is https, so enable SSL)
body = ...
Net::HTTP.start(url.host, url.port, use_ssl: true) do |http|
  http.send_request('PUT', url.request_uri, body, {
    'content-type' => 'audio/mpeg',
    'x-amz-acl' => 'public-read'
  })
end
# => #<Net::HTTPOK 200 OK readbody=true>

This issue appears to have resurfaced in 2.3.4.

s3 = Aws::S3::Resource.new(region: ENV['S3_REGION'])
obj = s3.bucket(ENV['S3_BUCKET_NAME']).object("#{Time.now.strftime('%Y/%m/%d/%H')}#{SecureRandom.hex}")
url = URI.parse(obj.presigned_url(:put, expires_in: 36000, acl: 'public-read'))

Results in following request URL:

https://bucketname.s3-us-west-2.amazonaws.com/2016/05/13/110d88197423e3bd5d868528cdf89e2ac5?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AWSKEY/20160513/us-west-2/s3/aws4_request&X-Amz-Date=20160513T173731Z&X-Amz-Expires=36000&X-Amz-SignedHeaders=host&x-amz-acl=public-read&X-Amz-Signature=cb47754a8d22d35c9f696a552772e70793dfd3a0f44c0643cd3d530b2a8f4e1e

Which results in the error:

<Error>
<Code>AccessDenied</Code>
<Message>There were headers present in the request which were not signed</Message>
<HeadersNotSigned>x-amz-acl</HeadersNotSigned>
<RequestId>F81F5AE6AE7F66C8</RequestId>
<HostId>eR+lfCsQFb/vGUtc2A6bQI263Q2L9Jip5VU9FmhLWEg73+l0kvrxLfroPa0MfmUUvUI54O5XddE=</HostId>
</Error>

Downgrading to 2.0.39 solves the issue.

I noticed that in 2.0.39 the URL has a semicolon after X-Amz-SignedHeaders. Is the lack of that semicolon what is causing the error in 2.3.4?

&X-Amz-SignedHeaders=host;x-amz-acl&X-Amz-Signature=00f4b33d45d314f63ed7e3a58c4a07bcd847fdd474d2f91f073a200bcd047c93
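
For what it’s worth, the semicolon is just the separator SigV4 uses between signed header names; in a properly encoded URL it appears as %3B. A quick check (values here are illustrative):

```ruby
require 'cgi'

# The X-Amz-SignedHeaders value is a ';'-separated list of header names.
# When the URL is percent-encoded, each ';' becomes %3B.
CGI.escape('host;x-amz-acl')
# => "host%3Bx-amz-acl"
CGI.unescape('host%3Bx-amz-acl').split(';')
# => ["host", "x-amz-acl"]
```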

Reopening - deprecating usage of Feature Requests backlog markdown file.

I’m still having problems with this in aws-sdk-core 2.10.47 / aws-sigv4 1.0.2.

I have a policy on my bucket that requires the x-amz-server-side-encryption parameter (I understand I can also just turn on encryption in the bucket settings, but that’s beside the point). I’m trying to create a presigned URL for a PUT request (uploading a PDF file) like:

my_bucket.object(new_key).presigned_url(:put, {expires_in: 60, content_type: 'application/pdf', server_side_encryption: 'AES256'})

and then upload with an AJAX request like this, using a file object from a browser form:

$.ajax({url: presignedUrl, type: 'PUT', contentType: file.type, processData: false, data: file})

But I get a 403 when I try to execute the request. I see this in Aws::Signers::V4#presigned_url:

request.headers.keys.each do |key|
  if key.match(/^x-amz/i)
    params.set(key, request.headers.delete(key))
  end
end
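
In isolation, that loop behaves like this (hypothetical header values):

```ruby
# Simulates the header-to-param move in the snippet above (hypothetical values).
headers = {
  'content-type' => 'application/pdf',
  'x-amz-server-side-encryption' => 'AES256'
}
params = {}
# headers.keys snapshots the key list, so deleting during iteration is safe
headers.keys.each do |key|
  params[key] = headers.delete(key) if key.match(/^x-amz/i)
end
headers # => {"content-type"=>"application/pdf"}
params  # => {"x-amz-server-side-encryption"=>"AES256"}
```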

So any param/header matching x-amz-* gets moved from the actual headers to the request params, and thus 1) isn’t present in the actual headers, and 2) isn’t included in the signed headers. This works fine if I remove that bucket policy, but with the policy in place, my understanding is that the server-side encryption header must be sent as an actual header and listed in the signed-headers param. If I monkey-patch the gem so it leaves the encryption header alone (i.e. comment out the snippet I pasted above) and add the header to my AJAX request like this:

$.ajax({url: presignedUrl, type: 'PUT', contentType: file.type, processData: false, data: file, beforeSend: function(xhr) {xhr.setRequestHeader('x-amz-server-side-encryption', 'AES256')}})

then it uploads successfully. So it seems like the code moving the headers to params is a bit overzealous and needs some exceptions for certain parameters.

Edit: I see this issue and the comments about aws-sigv4, but I thought that had been absorbed into the main aws-sdk-ruby gem by now, so I wasn’t sure if this was still expected.

Edit 2: using Aws::Sigv4::Signer directly as @cjyclaire described above does seem to work. This whole thing was pretty frustrating/confusing to figure out, though.
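
To make concrete why the header has to be listed, here is an illustrative-only SigV4 sketch in pure stdlib Ruby (hypothetical credentials, bucket, and key; this is not the SDK’s implementation): any header folded into the canonical request can be verified by S3, and anything else shows up as “present but not signed”.

```ruby
require 'openssl'
require 'cgi'

# Hypothetical credentials and region for illustration only.
ACCESS_KEY = 'AKIDEXAMPLE'
SECRET_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
REGION     = 'us-west-2'

# Build a presigned query string; the signed headers are hashed into the
# canonical request, which is why they must also be sent on the real PUT.
def presigned_query(key, headers, t)
  date  = t.strftime('%Y%m%d')
  scope = "#{date}/#{REGION}/s3/aws4_request"
  signed_names = headers.keys.map(&:downcase).sort

  query = {
    'X-Amz-Algorithm'     => 'AWS4-HMAC-SHA256',
    'X-Amz-Credential'    => "#{ACCESS_KEY}/#{scope}",
    'X-Amz-Date'          => t.strftime('%Y%m%dT%H%M%SZ'),
    'X-Amz-Expires'       => '900',
    'X-Amz-SignedHeaders' => signed_names.join(';')
  }.map { |k, v| "#{k}=#{CGI.escape(v)}" }.join('&')

  canonical_headers = signed_names.map { |n| "#{n}:#{headers.fetch(n)}\n" }.join
  canonical_request = ['PUT', "/#{key}", query,
                       canonical_headers, signed_names.join(';'),
                       'UNSIGNED-PAYLOAD'].join("\n")

  string_to_sign = ['AWS4-HMAC-SHA256', t.strftime('%Y%m%dT%H%M%SZ'), scope,
                    OpenSSL::Digest::SHA256.hexdigest(canonical_request)].join("\n")

  signing_key = [date, REGION, 's3', 'aws4_request']
    .inject('AWS4' + SECRET_KEY) { |k, part| OpenSSL::HMAC.digest('sha256', k, part) }

  "#{query}&X-Amz-Signature=#{OpenSSL::HMAC.hexdigest('sha256', signing_key, string_to_sign)}"
end

q = presigned_query('hello.pdf',
                    { 'host' => 'my-bucket.s3.amazonaws.com',
                      'x-amz-server-side-encryption' => 'AES256' },
                    Time.utc(2017, 10, 1))
# The encryption header now appears in the signed-headers list:
q.include?('X-Amz-SignedHeaders=host%3Bx-amz-server-side-encryption') # => true
```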

Here’s what I get using mostly your example code (different bucket, object key, credentials):

opening connection to <bucket>.s3.amazonaws.com:443...
opened
starting SSL for <bucket>.s3.amazonaws.com:443...
SSL established
<- "PUT /123/456/secret?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=20150724T094043Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&x-amz-server-side-encryption=aws%3Akms&x-amz-server-side-encryption-aws-kms-key-id=...&X-Amz-Signature=c146b9e31433793f7ce1c498e4cf34f7c9e99b361e19bb2637af8e0307a76c78 HTTP/1.1\r\nAccept-Encoding: gzip;q=1.0,deflate;q=0.6,identity;q=0.3\r\nAccept: */*\r\nUser-Agent: Ruby\r\nConnection: close\r\nHost: <bucket>.s3.amazonaws.com\r\nContent-Length: 22\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\n"
<- "secret-body-1437731099"
-> "HTTP/1.1 403 Forbidden\r\n"
-> "x-amz-request-id: E6C878ADCC13BDBF\r\n"
-> "x-amz-id-2: 3aXhZKqZnNRdh+3nE/DPIlscAn5VDcx7Li1CIvnbL5R6yyyKYYwkbXCSyTzauec3\r\n"
-> "Content-Type: application/xml\r\n"
-> "Transfer-Encoding: chunked\r\n"
-> "Date: Fri, 24 Jul 2015 09:44:54 GMT\r\n"
-> "Connection: close\r\n"
-> "Server: AmazonS3\r\n"
-> "\r\n"
-> "e7\r\n"
reading 231 bytes...
-> "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>E6C878ADCC13BDBF</RequestId><HostId>3aXhZKqZnNRdh+3nE/DPIlscAn5VDcx7Li1CIvnbL5R6yyyKYYwkbXCSyTzauec3</HostId></Error>"
read 231 bytes
reading 2 bytes...
-> "\r\n"
read 2 bytes
-> "0\r\n"
-> "\r\n"
Conn close
=> #<Net::HTTPForbidden 403 Forbidden readbody=true>

If I remove the bucket policy restriction described above, it works:

-> "HTTP/1.1 200 OK\r\n"
-> "x-amz-id-2: y8q3rybqs5+GmhKtOXB2CMLZ3m3AHONdjCEHETXfUr5sJ0qdxYNVopmRCHwo5bFe\r\n"
-> "x-amz-request-id: 1D8720EAF0710E9F\r\n"
-> "Date: Fri, 24 Jul 2015 09:55:57 GMT\r\n"
-> "x-amz-server-side-encryption: aws:kms\r\n"
-> "x-amz-server-side-encryption-aws-kms-key-id: ...\r\n"
-> "ETag: \"359201a47ab0c889dc9126b7ebc282b1\"\r\n"
-> "Content-Length: 0\r\n"
-> "Server: AmazonS3\r\n"
-> "Connection: close\r\n"
-> "\r\n"
reading 0 bytes...
-> ""
read 0 bytes
Conn close
=> #<Net::HTTPOK 200 OK readbody=true>

I need a bucket policy that restricts uploads to aws:kms-encrypted files. So, is there a way to write a bucket policy that checks the URL parameters? If not, I need to use the headers.
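
On the bucket-policy question: the commonly used pattern is a deny statement on the s3:x-amz-server-side-encryption condition key. A sketch, with a hypothetical bucket ARN; whether this condition key also matches the presigned query parameter (rather than only the header) is worth verifying against current S3 behavior:

```ruby
require 'json'

# Deny any PUT whose server-side-encryption value is not aws:kms.
# Bucket name 'a-fancy-bucket' is hypothetical.
policy = {
  'Version' => '2012-10-17',
  'Statement' => [
    {
      'Sid'       => 'DenyIncorrectEncryptionHeader',
      'Effect'    => 'Deny',
      'Principal' => '*',
      'Action'    => 's3:PutObject',
      'Resource'  => 'arn:aws:s3:::a-fancy-bucket/*',
      'Condition' => {
        'StringNotEquals' => { 's3:x-amz-server-side-encryption' => 'aws:kms' }
      }
    }
  ]
}

puts JSON.pretty_generate(policy)
# Apply with: Aws::S3::Client#put_bucket_policy(bucket: 'a-fancy-bucket', policy: policy.to_json)
```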

So this change was made so that x-amz- headers would be sent via the query string, eliminating the need for users to provide these values twice. Before the change you would have to specify these values to the presigner and then again to your HTTP client to send them as headers. This was necessary because signature version 4 requires all header values that start with x-amz- to be signed, including their expected values.

When the change was made, it was intended to be a bug-fix so that a user could provide the values once to the presigner and have the URL querystring contain their values. This means the presigned URL no longer signs them as headers as they are part of the request URI.

Here is an example showing how to create a presigned PUT url with server-side-encryption and KMS and then upload something with Net::HTTP:

require 'aws-sdk' # v2
require 'net/http'
require 'uri'
require 'logger'

obj = Aws::S3::Object.new('aws-sdk', 'secret') # bucket 'aws-sdk', key 'secret'

uri = URI(obj.presigned_url(:put, {
  server_side_encryption: 'aws:kms',
  ssekms_key_id: "cb467d40-59be-44d7-813f-d0f281092da8",
}))

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.set_debug_output(Logger.new($stdout))

req = Net::HTTP::Put.new(uri.request_uri)
req.body = "secret-body-#{Time.now.to_i}"
res = http.request(req)

obj.ssekms_key_id # show that it is encrypted server-side

This means you should be able to simply remove those headers from your HTTP request and everything should work. Clearly this was an unintentional side-effect of the bug-fix, and a breaking change. The tricky part is that if this is reverted, newer users would be broken (those counting on the values being part of the URI and not currently sending them as headers). This is a sort of catch-22 and I’m not sure what the best path forward is.

Thoughts?