serverless: s3 events can't refer to existing bucket

This is a Bug Report

Description

When specifying an s3 event, serverless will always create a new bucket. I would like to be able to specify an existing bucket defined in resources, e.g.:

```yaml
functions:
  myfunction:
    handler: handler.handler
    events:
      - s3:
          bucket: mybucket

resources:
  Resources:
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: mybucket
```

Additional Data

Currently, that causes an error, because mybucket ends up being defined twice in the CloudFormation template.

```
$ serverless deploy
Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading service .zip file to S3 (3.85 KB)...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
.........................Serverless: Deployment failed!

  Serverless Error ---------------------------------------

     An error occurred while provisioning your stack: S3BucketServerlessexample
     - mybucket already exists
     in stack arn:aws:cloudformation:us-east-1:872755943855:stack/example-dev/06bf9a10-f42b-11e6-a0f4-500c221b72d1.
```

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 31 (14 by maintainers)

Most upvoted comments

Hey @razbomi, thanks for commenting šŸ‘

A quick solution would be to overwrite the HelloLambdaPermissionUploadS3 resource as well.

Something like this (untested):

```yaml
resources:
  Resources:
    S3BucketUpload:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucket}
    HelloLambdaPermissionUploadS3:
      Type: AWS::Lambda::Permission
      Properties:
        SourceArn: arn:aws:s3:::${self:custom.bucket}
```

Thanks for opening this up @chris-olszewski šŸ˜Š … I think your use case is not really an "existing" bucket, but rather a new bucket that you're creating via custom resources.

You can merge new configuration into any of our core resources (e.g. the S3 event resource) by adding a resource under the resources section with the same core logical ID. The framework will merge the default configuration and your updates together. So you can do the following:

```yaml
functions:
  myfunction:
    handler: handler.handler
    events:
      - s3: something

resources:
  Resources:
    S3BucketSomething:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: overwrittenBucketName
```

Notice the same logical ID. The bucket name should end up as overwrittenBucketName instead of something.

Can we transform this into a feature request? A flag that enables/disables automatic bucket creation for S3 events would be a cool thing. A specified S3 event should then only create the Lambda permissions and the events on the bucket.
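Such a flag does not exist at this point in the thread; a hypothetical sketch of what it could look like (the existing key is invented here purely for illustration):

```yaml
functions:
  myfunction:
    handler: handler.handler
    events:
      - s3:
          bucket: mybucket
          # hypothetical flag, not part of the framework as discussed here:
          # skip bucket creation and only wire up the Lambda permission
          # and the bucket notification configuration
          existing: true
```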

Hm, okay, this could work as well. So we should document how to use an existing bucket with S3 events.

Ah it is already in the master. https://github.com/serverless/serverless/blob/master/docs/providers/aws/events/s3.md#custom-bucket-configuration

thx @azurelogic

Opened a new issue here guys for anyone that's interested ^_^

I'm not sure if the framework supports DependsOn or not; I haven't seen it in any of the generated files in my projects.

The use case I'm trying to solve with this configuration is being able to specify more of the S3 properties than the inline event syntax allows. While I haven't seen anyone else trying to implement this for static sites like I am, it looks like others are dealing with the same underlying issue of how to specify additional S3 properties on buckets created through serverless events. See #3309 for example.

If there's a better place to continue this discussion, please let me know. If there's a consensus on what the resolution for this category of request is, I'd like to help implement it.

@pmuens and @eahefnawy Please help… I am not sure why this is closed; maybe there is another ticket regarding this issue (if there is, please point me to it)…

But if I try to use the "logical id" in the events section of the function (as suggested near the start of this thread), the generated function permission refers to the event name instead of the amazing bucket name.

Eg.

```yaml
custom:
  bucket: myawsomelyamazingpartyhardbucketwhatevs

functions:
  hello:
    handler: handler.hello
    description: Organises amazing hello upload bucket parties
    events:
      - s3: upload

resources:
  Resources:
    S3BucketUpload:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucket}
```

The generated CloudFormation includes upload in the ARN instead of my bucket name:

    "HelloLambdaPermissionUploadS3": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "FunctionName": {
          "Fn::GetAtt": [
            "HelloLambdaFunction",
            "Arn"
          ]
        },
        "Action": "lambda:InvokeFunction",
        "Principal": "s3.amazonaws.com",
        "SourceArn": {
          "Fn::Join": [
            "",
            [
              "arn:aws:s3:::upload"
            ]
          ]
        }
      }
    },

This causes CloudFormation to spit out the highly informative message "Unable to validate the following destination configurations" on the S3BucketUpload resource.
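Echoing the earlier suggestion in this thread, one possible (untested) workaround for this configuration is to also override the generated permission so its SourceArn points at the real bucket name rather than the event value:

```yaml
resources:
  Resources:
    S3BucketUpload:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucket}
    # same logical ID as the generated permission, so the framework
    # merges this override into the default resource
    HelloLambdaPermissionUploadS3:
      Type: AWS::Lambda::Permission
      Properties:
        SourceArn: arn:aws:s3:::${self:custom.bucket}
```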

I thought this may be related; any ideas or suggestions?

Hi @pmuens thanks!