serverless: S3 Event Compile Fails If Event Is Declared in Function and in Resources

Very probably related to https://github.com/serverless/serverless/issues/1823 (but that one was closed).

This is a Bug Report

Description

If I declare an S3 bucket as an event inside a function (attached to that function) and also declare it in the resources section, I get an error saying that <bucket-name> already exists in stack.

For bug reports:

  • What went wrong? If I declare an S3 bucket as an event inside a function (attached to that function) and also declare it in the resources section, I get an error saying that <bucket-name> already exists in stack.

  • What did you expect should have happened? I expected it to find the same logical bucket and not try to create the bucket twice.

  • What was the config you used?

functions:
  first:
    handler: handler.first
    events:
      - s3: mybucket

resources:
  Resources:
    newCoolResource:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: mybucket

  • What stacktrace or error message from your provider did you see?
  Serverless Error ---------------------------------------
 
     An error occurred while provisioning your stack: S3BucketDawilcoxawesomeaxcadfasdf
     - dawilcox-awesome-axcadfasdf already exists in stack
     arn:aws:cloudformation:us-east-1:272016194640:stack/aws-nodejs-dev/3af04030-a527-11e6-ac5e-50d5ca632682.
 
  Get Support --------------------------------------------
     Docs:          docs.serverless.com
     Bugs:          github.com/serverless/serverless/issues
 
  Your Environment Information -----------------------------
     OS:                 darwin
     Node Version:       6.2.1
     Serverless Version: 1.4.0

For feature proposals:

  • What is the use case that should be solved? The more detail you provide, the easier it is for us to understand.

I want to have a way to have the entire service own the bucket (as the bucket is used by multiple functions). Then, I want one of the functions to be triggered by the bucket (but not own it).

  • If there is additional config, how would it look? N/A

Similar or dependent issues:

Additional Data

  • Serverless Framework Version you’re using: 1.4.0
  • Operating System: Mac
  • Stack Trace: Shown above.
  • Provider Error messages: Shown above.

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 7
  • Comments: 22 (7 by maintainers)

Most upvoted comments

@eahefnawy

The problem is that I’m “evolving” a service. In particular, I’m showing some other people how Serverless works. The first thing we do is create an S3 bucket to put files into. At that point, declaring the S3 bucket in the resources section makes the most sense.

After we have that done, I want to hook up a function to be triggered when files are added. That’s an after-the-fact step that we didn’t consider at first. It’s evolutionary.

The problem with your response is that it assumes everyone will have their serverless.yml right the first time they ever hit deploy. That’s just not reasonable (to me). People are going to want to modify serverless.yml after the first deploy (or at least after the first time they deploy an S3 bucket as a resource). One likely modification: “Oh! I have this S3 bucket! Let’s trigger on it now!” There’s absolutely no way to evolve your service to do this without some awful client-side hack that modifies the .serverless directory.

If modifying the .serverless directory is the intended normal workflow, then I guess I can show 200 people how to do it in a 30-minute lab. That in itself seems pretty “hacky”, though.

@davidwilcox I followed the first recommendation from @eahefnawy to resolve the same issue. The answer is not to define an S3 event on the function (since Serverless then attempts to create a new S3 bucket), but to manually define the NotificationConfiguration on the S3 bucket resource, along with a corresponding Lambda permission resource. (This solution relies on the CloudFormation naming convention Serverless uses for Lambda functions.) In your case, it would look something like:

functions:
  first:
    handler: handler.first

resources:
  Resources:
    newCoolResource:
      DependsOn:
        - FirstLambdaPermissionMybucketS3
      Type: AWS::S3::Bucket
      Properties:
        BucketName: mybucket
        NotificationConfiguration:
          LambdaConfigurations:
            - Event: "s3:ObjectCreated:*"
              Function:
                "Fn::GetAtt": [ FirstLambdaFunction, Arn ]
    FirstLambdaPermissionMybucketS3:
      DependsOn:
        - FirstLambdaFunction
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          "Fn::GetAtt": [ FirstLambdaFunction, Arn ]
        Action: "lambda:InvokeFunction"
        Principal: "s3.amazonaws.com"
        SourceArn: "arn:aws:s3:::mybucket"

So, just wanted to chime in on this issue. I believe #2732 ultimately introduced a regression, in that configurations that worked in serverless < 1.2 stopped working. Specifically, configurations that declared a logical name for the S3 bucket in a function's event source and then set the real bucket name in Resources via S3Bucket{normalizedBucketName}.Properties.BucketName. The SourceArn on the generated permission now references the logical name, not the actual bucket name.

Example:

functions:
  myFunction:
    events:
      - s3:
          bucket: mylogicalbucket
[...]
resources:
  Resources:
    S3BucketMylogicalbucket:
      Properties:
        BucketName: ${self:custom.vars.bucketIn}

Results in the following CF snippet:

    "MyFunctionLambdaPermissionMylogicalbucketS3": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "FunctionName": {
          "Fn::GetAtt": [
            "MyFunctionLambdaFunction",
            "Arn"
          ]
        },
        "Action": "lambda:InvokeFunction",
        "Principal": "s3.amazonaws.com",
        "SourceArn": {
          "Fn::Join": [
            "",
            [
              "arn:aws:s3:::mylogicalbucket"
            ]
          ]
        }
      }
    },

Instead of using Fn::Join for the SourceArn, can’t we just use Fn::GetAtt on S3BucketMylogicalbucket? Everything else seems to line up correctly; I’m just not understanding why we’re composing the SourceArn manually here.
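
For illustration, a minimal sketch of what that suggestion would look like, written as an overwrite in the serverless.yml resources section and reusing the logical IDs from the generated snippet above (purely hypothetical; as a later comment explains, this kind of reference runs into circular dependencies, which is why the framework composes the ARN as a plain string):

resources:
  Resources:
    # Reuse the generated logical ID so the framework merges these properties
    # into the permission it creates.
    MyFunctionLambdaPermissionMylogicalbucketS3:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          Fn::GetAtt: [MyFunctionLambdaFunction, Arn]
        Action: lambda:InvokeFunction
        Principal: s3.amazonaws.com
        # The suggestion: reference the bucket resource directly
        # instead of composing its ARN as a string.
        SourceArn:
          Fn::GetAtt: [S3BucketMylogicalbucket, Arn]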

Could you point me to the “couple of fixes for this in the meantime”? I see a lot of discussion and PRs, but no fix for the event-vs-resource conflict. I need to add a CORS config on the S3 bucket.

It turns out that the hard-coded string is used because of circular dependencies. I think the only solution is to make everything that can be configured on an S3 bucket (like CORS) configurable when defining the event source, instead of needing to merge additional configuration in later.

@davidwilcox imo it would be pretty hacky to filter through all your custom resources and check whether you’re referencing the same bucket.

You can do one of two things:

  1. You can create the event source mapping yourself completely and not declare an S3 event in the function config. If you do that you’ll need to provide ALL the required properties.

  2. OR, you can overwrite the CF template generated by Serverless for the S3 event. You do that by reusing the same logical ID of the S3 event, with the help of the logical ID guide. If you do that, you don’t need to provide all the properties, only what you want to overwrite; the framework will just merge your changes (a sketch follows below).

For both of these cases, you can find all the info you need in the generated CF template in the .serverless directory.
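
Option 1 is essentially what the NotificationConfiguration workaround earlier in this thread shows. For option 2, here is a minimal sketch, assuming the bucket generated for an event like "- s3: mybucket" gets the logical ID S3BucketMybucket (following the S3Bucket{normalizedBucketName} convention mentioned above), and using a CORS configuration as the example of extra properties to merge in:

functions:
  first:
    handler: handler.first
    events:
      - s3: mybucket

resources:
  Resources:
    # Same logical ID as the bucket Serverless generates for the event above,
    # so these properties are merged into the generated resource instead of
    # creating a second bucket.
    S3BucketMybucket:
      Type: AWS::S3::Bucket
      Properties:
        CorsConfiguration:
          CorsRules:
            - AllowedMethods: [GET, PUT]
              AllowedOrigins: ["*"]

The generated template in the .serverless directory shows the exact logical IDs to target.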

Out of curiosity though, why are you declaring the bucket as a custom resource? It already gets created for you.

Thanks for using Serverless 🙌