serverless-application-model: S3 Event triggers not working
Hi, I am facing an issue where the S3 event is not being created and associated with the Lambda function, although it is specified in the SAM template:
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example
Resources:
  LogToWatch:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.6
      Timeout: 300
      Policies: AmazonS3ReadOnlyAccess
      Events:
        S3CreateObject:
          Type: S3
          Properties:
            Bucket:
              Ref: TargetBucket
            Events: s3:ObjectCreated:Put
  TargetBucket:
    Type: AWS::S3::Bucket
```
About this issue
- State: closed
- Created 6 years ago
- Reactions: 7
- Comments: 47 (7 by maintainers)
@phongtran227 @pierremarieB
You can add an AWS::Lambda::Permission to your Resources. It should look something like the following. That worked for me.
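The snippet referenced here was not preserved in the thread; a minimal sketch of such a permission, reusing the resource names from the template above (the exact original may have differed), would be:

```yaml
# Hypothetical sketch: grants the S3 service permission to invoke the
# function, scoped to one bucket via SourceArn so the console can show it.
LogToWatchPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref LogToWatch
    Principal: s3.amazonaws.com
    SourceAccount: !Ref AWS::AccountId
    SourceArn: !GetAtt TargetBucket.Arn
```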
If you use the S3 event in SAM, then you don’t see any trigger in the Lambda configuration panel. But the Lambda function is executed when you drop a file in the S3 bucket.
Just wondering why this is closed? I’m seeing the same issue. The bucket event source does not show up in the console. It makes it very confusing to know what is going on.
For those using the S3 template created by the CLI: change the template to this one and the S3 trigger will be created. I also deleted the stack created by the old template and recreated the stack with the new one. Thanks @henrikbjorn
The S3 triggers are working, but they are not appearing in the Lambda console.
This is a breaking change, so we wouldn’t be able to do this unless we made a new version of SAM. I agree, though, that this should be fixed.
Lu’s comment (https://github.com/awslabs/serverless-application-model/issues/300#issuecomment-408950770) describes what needs to be changed.
Thank you.
There should be a note about this in the official docs. Just sent feedback to them.
I get this problem too. When I deploy the app to AWS with an S3 trigger configured, I don't see it in the AWS console. I also tried DynamoDB and API Gateway triggers, and those do show up in the console. I don't know why.
Another question: I can't set the bucket name when I configure the S3 event, but the older docs show setting the S3 bucket name. Is something different between the old and current versions?
This doesn’t work for me because it complains about circular dependencies:
Used the provided template to reproduce the issue.
The Lambda permission created by SAM looks like this:
But Lambda expects this in order to show the trigger in the console:
It is missing this part:
In the transformed CloudFormation template, the AWS::Lambda::Permission resource looks like this:
It is missing the SourceArn property. It should be something like this:
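The actual snippets from this comment were not captured; based on the surrounding description, the contrast is roughly the following sketch (resource names are illustrative, taken from the template at the top of the issue):

```yaml
# What SAM generates (sketch): no SourceArn, so the Lambda console
# cannot associate the permission with a specific bucket.
LogToWatchS3CreateObjectPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref LogToWatch
    Principal: s3.amazonaws.com
    SourceAccount: !Ref AWS::AccountId
    # What the console expects in addition (the missing part):
    # SourceArn: !GetAtt TargetBucket.Arn
```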
However, the issue is that SAM currently creates the permission without scoping it to a specific bucket. If we fix this in SAM by scoping it to a specific bucket so that it shows up properly in the Lambda console, it could break existing customers who expect the broader permissions.
Though we are not changing the existing behaviour, there is a simple way to deal with the problem gracefully.
First, let me briefly explain why we have not modified the existing Lambda resource policy and are not planning to do so. As mentioned above, the problem with the console not showing the trigger comes from the fact that the resource policy created by SAM for the S3 event does not restrict Lambda access to a single bucket. If we change the policy now, it will break working code for customers who already rely on the broader permissions, as mentioned in the referenced explanation and here.
Second, I'd like to recommend a way to narrow down the permissions so they work with the console, avoid the circular-dependency pitfall, and reduce boilerplate. It is based on ideas many of you have already figured out, and it leverages the SAM Connector resource we recently introduced.
Instead of crafting an AWS::Lambda::Permission, use AWS::Serverless::Connector, which we introduced in September 2022. You can read more on Connectors here. To guarantee that there is no circular dependency, hardcode your bucket name.
Here is an example, based on the one submitted when the issue was opened. Notice how we have to set the bucket name explicitly and then reference the Connector source by its ARN.
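The example template itself was not preserved in this thread; a sketch of the approach being described, with an illustrative hardcoded bucket name, might look like:

```yaml
Resources:
  LogToWatch:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.9
      Policies: AmazonS3ReadOnlyAccess
      Events:
        S3CreateObject:
          Type: S3
          Properties:
            Bucket: !Ref TargetBucket
            Events: s3:ObjectCreated:Put
  TargetBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Hardcoded name (illustrative) so the Connector can reference the
      # bucket by a literal ARN instead of !GetAtt, breaking the cycle.
      BucketName: my-example-log-bucket
  S3ToLambdaConnector:
    Type: AWS::Serverless::Connector
    Properties:
      Source:
        Type: AWS::S3::Bucket
        Arn: arn:aws:s3:::my-example-log-bucket
      Destination:
        Id: LogToWatch
      Permissions:
        - Write
```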
P.S. If you don't have to stick to AmazonS3ReadOnlyAccess for compatibility reasons, you can use another connector instead of it. Notice that Source and Destination have exchanged places, and we require Read permissions for Lambda-to-S3 access. A connector results in a more granular generated policy. Compare the one generated by the connector to the one from AmazonS3ReadOnlyAccess.
https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/79
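The policy snippets from that comparison were not captured. As a rough sketch (not the exact generated output), the difference is one of scope: the connector grants read actions on a single bucket, while AmazonS3ReadOnlyAccess grants read and list on every bucket in the account.

```yaml
# Sketch of a connector-generated policy: read actions scoped to one bucket
# (the bucket name is illustrative; exact actions may differ).
Version: '2012-10-17'
Statement:
  - Effect: Allow
    Action:
      - s3:GetObject
      - s3:ListBucket
    Resource:
      - arn:aws:s3:::my-example-log-bucket
      - arn:aws:s3:::my-example-log-bucket/*

# Versus the AWS managed policy AmazonS3ReadOnlyAccess, which is roughly:
#   Action: [ s3:Get*, s3:List* ]
#   Resource: '*'
```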
@moosakhalid You also need access to the S3 bucket. I found that it is important to specify Version when specifying policies. I have only tried this where I created the S3 bucket in the same CloudFormation template.
I am experiencing a similar issue with SQS events. Using SAM to deploy a Lambda with an SQS event, the Lambda receives messages from the queue, but the trigger is not visible in the AWS console.
Please note that when using Events, you are expecting SPECIFIC permissions. Giving broader permissions is an issue, not a feature. When creating a Lambda using SAM and giving an S3 bucket as a trigger, I expect that only that bucket is able to trigger the Lambda. Granting broader permissions than that seems insecure.
I’m seeing the same behavior as @bottemav