serverless: s3 events can't refer to existing bucket
This is a Bug Report
Description
When specifying an s3 event, serverless will always create a new bucket. I would like to be able to specify an existing bucket defined in resources, e.g.:
```yaml
functions:
  myfunction:
    handler: handler.handler
    events:
      - s3:
          bucket: mybucket

resources:
  Resources:
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: mybucket
```
Additional Data
Currently, that causes an error, because mybucket ends up being defined twice in the CloudFormation template.
```
$ serverless deploy
Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading service .zip file to S3 (3.85 KB)...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
.........................Serverless: Deployment failed!

Serverless Error ---------------------------------------

An error occurred while provisioning your stack: S3BucketServerlessexample
- mybucket already exists
in stack arn:aws:cloudformation:us-east-1:872755943855:stack/example-dev/06bf9a10-f42b-11e6-a0f4-500c221b72d1.
```
About this issue
- State: closed
- Created 7 years ago
- Comments: 31 (14 by maintainers)
Commits related to this issue
- aws s3 events: fix referencing existing bucket Previously, defining S3 events would always create a bucket resource in the CloudFormation template. Referring to an existing bucket would cause the buc... — committed to christophgysin/serverless by christophgysin 7 years ago
Hey @razbomi, thanks for commenting!
A quick solution would be to overwrite the `HelloLambdaPermissionUploadS3` resource as well. Something like this (untested):
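A sketch of what that override might look like, based on the `HelloLambdaPermissionUploadS3` logical id mentioned above (the function logical id and bucket name below are assumptions for illustration; untested, as the comment says):

```yaml
resources:
  Resources:
    # Overwrite the Lambda permission the framework generates for the S3 event
    # so its SourceArn points at the real bucket instead of the generated one.
    HelloLambdaPermissionUploadS3:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          Fn::GetAtt: [HelloLambdaFunction, Arn]  # assumed function logical id
        Action: lambda:InvokeFunction
        Principal: s3.amazonaws.com
        SourceArn: arn:aws:s3:::my-real-bucket-name  # assumed bucket name
```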
Thanks for opening this up @chris-olszewski! I think your use case is not really an "existing" bucket, but rather a new bucket that you're creating via custom resources.
You can merge new configuration into any of our core resources (e.g. the S3 event's bucket resource) by adding a resource under the resources section with the same core logical id. The framework will merge its default configuration and your additions together. So that means you can do the following:
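A sketch of that merge, following the naming scheme visible in the error output above (bucket `serverlessexample` produced the logical id `S3BucketServerlessexample`, so bucket `something` is assumed to produce `S3BucketSomething`):

```yaml
functions:
  myfunction:
    handler: handler.handler
    events:
      - s3: something

resources:
  Resources:
    # Same logical id the framework generates for the event's bucket,
    # so this merges with the default bucket resource.
    S3BucketSomething:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: overwrittenBucketName
```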
Notice the same logical ID. The bucket name should end up as `overwrittenBucketName` instead of `something`.

Can we turn this into a feature request? A flag that enables/disables automatic bucket creation for S3 events would be a cool thing. A specified S3 event should then just create the Lambda permissions and event notifications on the bucket.
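The proposed flag might look something like this (hypothetical syntax, not implemented at the time of this comment):

```yaml
functions:
  myfunction:
    handler: handler.handler
    events:
      - s3:
          bucket: mybucket
          existing: true  # hypothetical flag: skip bucket creation, only wire up
                          # the Lambda permission and notification configuration
```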
Hm, okay, this could work as well. So we should document "how to" use an existing bucket with S3 events.

Ah, it is already in master: https://github.com/serverless/serverless/blob/master/docs/providers/aws/events/s3.md#custom-bucket-configuration
thx @azurelogic
Opened a new issue here, guys, for anyone that's interested ^_^
I'm not sure if the framework supports DependsOn or not - I haven't seen it in any of the generated files in my projects.
The use case I'm trying to solve with this configuration is being able to specify more of the S3 properties than I can using the inline event syntax. While I haven't seen anyone else trying to implement this for static sites like I am, it looks like others are dealing with the same underlying issue of how to specify additional S3 properties on the buckets created through serverless events. See #3309 for example.
If there's a better place to continue this discussion, please let me know - if there's a consensus on the resolution for this category of request, I'd like to help implement it.
@pmuens and @eahefnawy Please help... I am not sure why this is closed; maybe there is another ticket regarding this issue (if there is, please point me to it)...
But if I try to use the "logical id" in the events section of the function (as suggested near the start of this thread), the generated function permission refers to the logical id instead of the actual bucket name.
E.g., the generated CloudFormation includes `upload` in the ARN instead of my bucket name. This causes CloudFormation to spit out the highly informative message `Unable to validate the following destination configurations` on the `S3BucketUpload` resource.

I thought this may be related - any ideas or suggestions?
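To illustrate the setup being described (bucket name and handler are assumptions; the logical id `S3BucketUpload` comes from the error above):

```yaml
functions:
  hello:
    handler: handler.hello
    events:
      # Referencing the bucket by logical id, as suggested earlier in the thread
      - s3: upload

resources:
  Resources:
    S3BucketUpload:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-real-bucket-name  # assumed real bucket name
```

With this configuration, the generated Lambda permission ends up with `SourceArn: arn:aws:s3:::upload` rather than an ARN containing the real bucket name, which triggers the validation error quoted above.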
Hi @pmuens thanks!