serverless: Skip resources if already exist
This is a (Feature Proposal)
Description
For bug reports:
- What went wrong?
I hit a bug where CloudFormation got stuck in UPDATE_ROLLBACK_FAILED, so I had to delete the stack and deploy with Serverless again. But then I ran into another problem:
Serverless Error ---------------------------------------
An error occurred while provisioning your stack: AvailableDynamoDbTable
- Available already exists.
- What did you expect should have happened?
I think databases are too critical at the production level not to use Retain. A single bad deploy, or removing the stack, can wipe out all your tables; a bad deploy can easily be rolled back, but the data is truly critical.
So I suggest something like serverless deploy --skipResources, which would skip the resources that already exist so CloudFormation won't raise that error.
Similar or dependent issues:
Additional Data
- Serverless Framework Version you’re using: 1.6.1
- Operating System: Mac OS El Capitan
- Stack Trace:
- Provider Error messages:
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 269
- Comments: 120 (31 by maintainers)
Commits related to this issue
- Fix resource name, https://github.com/serverless/serverless/issues/3183#issuecomment-407473966 — committed to fernando-mc/techpayrates by fernando-mc 5 years ago
If the data is too important to delete, you probably shouldn’t be managing the Table resource in your service definition - it belongs outside, either in a “resource-only service” (if you want to use sls to manage it), or in a completely different CFN template.
As I said, a bug can happen anywhere, as it did for me. So it’s not a matter of a wrong management pattern. It can and does happen. This feature would save a lot of headache if it happens in a prod env.
I would be happy for this feature to exist as well
Wish I knew what half this shit meant.
I am up for “skip if exists”
Ah, what you need is SkipIfExists: True in your yml file. Oh wait, the folks at Senseless don’t support this.
Great, thanks for all the input. I’m going to close it; please read the reasoning carefully:
What we deal with here, is not a limitation of a Framework per se, but limitation of CloudFormation through which Framework deploys configured services.
While what’s being requested now seems “kind of” possible with CloudFormation (via a combination of DeletionPolicy handling and the recently introduced resource-import capability), tackling this generically seems far from trivial. It may require tons of work (and new issues to fight with), as already observed by @kennu.
Due to the implied complexity this doesn’t seem like the right direction. It seems more reasonable to agree that resources configured with a Serverless service are an inseparable part of that service and are meant to be removed together with it (when we remove it with sls remove). For cases where we don’t find that acceptable, we should configure the resources in question externally. Note that the Framework in many places allows attaching to existing (externally created and configured) resources.
Internally, we have also put a lot of effort into Serverless Components, which are not backed by CloudFormation and so do not share its limitations. In that context we attach to already existing resources on deploy, as is being requested here.
@kennu has made a plugin for deploying additional CF stacks with Serverless.
Another option could be to import existing resources. Allow serverless to hook into existing infrastructure to start working on it.
Terraform has a similar feature: https://www.terraform.io/docs/import/index.html
Unfortunately Serverless lacks this very basic feature to painlessly redeploy applications without using another tool. I think by default it should not create a resource if it exists, and should delete and recreate only if I say so. That way it is less destructive and you gain the confidence to use it in production, without a moment's lapse of attention causing HUGE damage.
I just ran into this issue by accidentally deleting the wrong Serverless app (luckily a dev version from the wrong branch and not the production app). Our DynamoDB tables all have DeletionPolicy: Retain for exactly these situations. However, because the table was already there I could not re-deploy the app.
Here’s how I worked around the issue:
This way I was able to retain all the data and got the CloudFormation stack to work properly again. I confirmed this by making a change to the ProvisionedThroughput of one of the tables and then deploying, which worked as expected.
I know this is a bit unconventional as you’re not supposed to touch CloudFormation controlled resources manually, but I would imagine if this would happen to someone in production environment this workaround might be a real life saver.
And for the feature proposal: I agree with what many have said here that it doesn’t make any sense to declare the same resource in multiple Serverless apps. However, for my particular case, i.e. removing the stack and then later trying to re-deploy it, the proposal to fix this for all resource types sounds way too big, as the root cause is CloudFormation’s DeletionPolicy: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
I think the only feasible way would be to create custom Serverless plugin for each resource type to migrate the existing data somehow when the stack is re-created (using the backups like I did can be pretty slow as it can take ~4 hours). Something like that could also be used for “branching” apps so you could for example migrate the DynamoDB table data to the new app. And if you start talking about data migration then you might want to think about migrating data when you need to change the table keys or join two tables together etc… (thinking of something like FlyWay but for DynamoDB). And btw by feasible I mean “possible but a huge amount of work!” 😁
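For readers unfamiliar with the attribute discussed above, here is a minimal sketch of a table protected with DeletionPolicy: Retain in a serverless.yml resources section (the logical ID and attribute names are illustrative):

```yaml
resources:
  Resources:
    AvailableDynamoDbTable:        # logical ID, illustrative
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain       # table survives `sls remove` / stack deletion
      Properties:
        # omitting TableName lets CloudFormation generate a unique name;
        # a fixed name is what produces the "already exists" error on re-deploy
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```

Note the trade-off described in this thread: Retain keeps the data when the stack is deleted, but the leftover table then blocks a fresh deploy unless it is imported back into the new stack.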
+1 😦
S3 buckets and Dynamo tables almost never get deleted. Having it as a hard requirement that Serverless fails if they already exist does not make sense. We get around this by using Ansible to deploy these resources, but the idea that Serverless can’t handle this scenario makes no sense. Yes, lambdas should get destroyed and recreated every time, because they are code, but buckets and tables are not.
I agree - a feature to “skip creating a resource if it already exists” would be valuable, especially for applications that require a database to exist but need the data to persist over the lifespan of the application. It’s unreasonable to assume that a new version of the application should implicitly replace or wipe out data, and it’s also unreasonable to require a new database table for every release.
I encounter similar problems with databases, as well as with IAM roles and policies that let specific functions, services, or resources communicate as needed in an application. If I amend a policy or a role and deploy my changes, the old policy should be removed/replaced/changed, and those changes should apply to the services the role is attached to.
It’s been 3 years now since this issue was opened. Could we please have this basic functionality implemented? I mean, it’s obvious that a lot of people want and need this.
I’m pretty new to the Serverless project, been lurking. But I use CloudFormation almost daily, and was curious whether serverless deploy could have a complementary argument named ‘update’.
Since it’s basically just generating a cloudformation template, that template can be applied as an update instead of a deploy.
An update will compute the difference between what is already present and notify you of the changes before taking action.
Apparently, my situation is already taken care of. I missed this in the docs.
Apologies for the noise.
Where should I add this flag in the yml file? I didn’t find any doc for using this flag. Could you please provide an example?
Given Serverless is a private company Do you guys have any open governance structure for the framework? Like RFCs that will let the Serverless user base tackle discussions like this a bit more formally? cc @pmuens
@rowanu
I don’t think that’s a valid reason not to implement it. Shouldn’t the community try to find a collective solution to this?
I too am having a problem with this.
I have a lambda service which subscribes to an SNS topic created, and written to, by a server-resident service. I am attempting to use ServerLess™ to manage this lambda, but I get the following error on deployment:
I understand, from #1842, that ServerLess™ is failing when it attempts to create the topic.
It makes sense to me that the service that writes to the topic should create the topic. Having the subscriber(s) create SNS topics, especially in cases such as mine, where the topic is widely subscribed to, seems sub-optimal. In this case ServerLess™ should skip the creation of the pre-existing resource.
Hey, I just realized that CloudFormation does have support for importing existing resources into a stack: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html
In the AWS SDK it’s implemented with the ResourcesToImport parameter to createChangeSet. I believe the Serverless Framework doesn’t use createChangeSet(), but instead calls updateStack() directly, so this would require a large refactoring.
Sorry about the wrong information I said earlier about CloudFormation not supporting this.
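For reference, the import flow mentioned above centers on the ResourcesToImport parameter of CreateChangeSet. A sketch of the descriptor it takes, using the logical ID and table name from the error at the top of this issue as illustration:

```json
[
  {
    "ResourceType": "AWS::DynamoDB::Table",
    "LogicalResourceId": "AvailableDynamoDbTable",
    "ResourceIdentifier": { "TableName": "Available" }
  }
]
```

The change set is then created with change-set type IMPORT, and every imported resource must carry a DeletionPolicy in the template being applied. As noted, the Framework calls updateStack directly, so this flow currently has to be driven by hand or via the AWS CLI.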
Again, I disagree with those asking for this feature. Serverless makes a CloudFormation template and deploys it. It should not make a different template depending on the current state of your environment. That negates much of the benefit of infrastructure-as-code.
Closed without a solution or am I missing something here?
Even if it is not a real solution, I’ve found a workaround that is working for me, hope it can help.
When deploying a stack containing existing resources (for example dynamodb tables left untouched by a retention policy) it will fail but it will create a cloudformation stack and a bucket containing the cloudformation template.
From the cloudformation console it will be possible to manually import the failed resources, as specified here https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html
After completing all the steps it will be possible to successfully deploy from serverless as normal.
linz/qgis-plugin-repository#132
+1 to @jthomerson 's comment on May 4 which summarizes a pitfall of implementing this feature. I’d like to add to the crowd for a call for a skip feature. But, I would love for it to be an explicit option rather than a default. Currently, it seems that the multiple stacks/Fn::ImportValue route is the best option, with the unfortunate drawback of stack coupling 😦
@bwship but as @kennu mentions - and as I describe above - this is not a failing of the Serverless framework. Serverless is (to oversimplify) a convenience wrapper around CloudFormation. It’s CloudFormation that is saying “you can’t make something with that name because there’s already something with that name”.
That’s exactly the behavior that most CloudFormation users expect - and need. Anything else would cause non-deterministic behavior, which is contrary to the goal of infrastructure as code.
You have a couple of options for dealing with this:
Your suggestion would break things in subtle ways for people. Imagine you deleted a stack, and the bucket was left there because it had a retention policy that made it stay when the stack was deleted. Now you try to re-create the stack. In this scenario, I absolutely want CloudFormation to fail - to tell me that I’m not creating a new table or bucket, but already have one there with that name. That safeguards me from thinking that the stack deployed cleanly, making nice, new resources, but in reality it’s my old resources left around. But under your suggestion, this would be a silent failure - I’ve got some old table or bucket with whatever old state it had just laying around and now I think my nice, new, service deployed but it’s really not.
Ok, here is another use case for this feature (hence another user being bitten by this problem 😄). Our use case is as follows: We have an SNS topic that acts as a broadcast mechanism for several things, some of them are lambdas. To avoid losing any message we put an SQS queue between the SNS topic and the lambda. This SNS to queue mapping is only useful for this lambda because it includes filtering and it works like a buffer for the lambda. So the most reasonable thing was to create the queue and subscribe it to the SNS topic on the same serverless file where we declare the lambda that is going to consume it. It does not make any sense to create a separate stack just to map the queue to the SNS topic and then import the queue on the lambda file when we can do everything on the same file, making the relationship much more obvious and making sure that the required resources will get created no matter to which environment you are going to deploy… because as many here do, we have like a dozen environments and creating resources manually before deployment is not an option.
Having a separate stack for the resources is not only less convenient, but it has the same problems as declaring them on the same file as the lambda: what if we want to add another queue? SLS is going to re-create the entire “resources” stack instead of just pushing the new stuff.
I would like to hear a solution to this situation
Also hitting the problem. Use case: simple service with dynamodb, want to create if not existing, else update or at least ignore, for several environments. We are looking for simplicity - it is a pain to do this by hand, and complicate the deployment with extra steps…
That’s rather sad. Can’t we just have a flag that says ‘skip if exists’? Shouldn’t be too hard…
Hello all, I am ramping up on Serverless, and have already experienced a need to reference an existing resource as an event trigger to my function.
After thoroughly reviewing this thread, it is apparent that more documentation from Serverless is required to handle everyday scenarios.
For anyone who is just starting with Serverless: the resources section is used to define NEW resources you want to create as part of your service’s rollout. Only use it to define new resources. If you start using the resources section to reference existing resources, you are creating resource versioning issues. This is why the OP’s suggested --skipResources option would not be a good solution.
Example: you have two serverless.yml files both defining a shared Resource, both with slightly different properties, whether on purpose or by accident. Which one is correct? Or let’s say both Resource definitions are the same, but you want to change the properties: now you have to update the Resource definition in two places. Not a good approach.
The better approach is to have a master or base serverless.yml define all Resources once, and have a serverless variable or Fn::ImportValue in all subsequent serverless.yml files reference the ARN of the existing Resource.
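A sketch of that split, with illustrative names: a base service defines the table once and exports its ARN, and consuming services import it instead of redefining the resource (long-form intrinsics are used since short-form tags are not always parsed by the Framework):

```yaml
# base service's serverless.yml: defines the table once and exports its ARN
resources:
  Resources:
    SharedTable:
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
  Outputs:
    SharedTableArn:
      Value:
        Fn::GetAtt: [SharedTable, Arn]
      Export:
        Name: shared-table-arn   # export name, illustrative

# any other service's serverless.yml: references the existing table
provider:
  environment:
    TABLE_ARN:
      Fn::ImportValue: shared-table-arn
```

CloudFormation refuses to delete the base stack while any other stack still imports shared-table-arn, which is the safety guarantee mentioned in the comments below.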
I am constantly running into this issue as well. It prevents continuous deployment on my system. I created a Stack Overflow discussion on how to handle tables that block updates: http://stackoverflow.com/questions/43771000/how-to-migrate-dynamodb-data-on-major-table-change/43790256#43790256 Maybe that helps you guys implement better handling of table deployment in this framework.
@hermanmedsleuth then why is my serverless.yml file an interface to CF and not just a subset of it? It looks like something partially implemented right now.
I agree with @rowanu here. If you want an extra level of safety for your resources, put them into a separate resource-only CF stack, export the resource names, and import them via Fn::ImportValue in your function stack. This also guarantees that the resource stack cannot be deleted as long as it is referenced anywhere. A CF stack naturally owns its resources and makes sure that everything is created/changed/updated in a transactional way. BTW: you should not specify a TableName property on DynamoDB tables, as this prevents any change that requires Replacement, like changing the keys. A better way is to grab the table name via Ref where you need it and publish it to your code through environment variables.
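The Ref-plus-environment-variable pattern from the previous comment, as a minimal sketch (logical ID and attribute names are illustrative, and no TableName is set):

```yaml
provider:
  environment:
    # CloudFormation-generated table name, resolved at deploy time
    USERS_TABLE: { Ref: UsersTable }

resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      # no TableName property: CFN generates one, so key changes
      # that require Replacement can succeed
      Properties:
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```

The function code then reads the USERS_TABLE environment variable instead of hard-coding a table name.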
Was this ever solved?
@muk98 - As outlined in the closing post by @medikoo, this is very challenging from an implementation perspective and is a limitation of CloudFormation, which the Framework currently uses and which cannot be bypassed without significant effort. The best workaround, in my opinion, is separating your core services that should be retained into a separate Framework service and referencing them from the services that can be fully removed. You can manage such setups more conveniently, e.g. with the Serverless Compose functionality: https://www.serverless.com/framework/docs/guides/compose
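A sketch of the setup suggested above using Compose (service names, paths, and the output name are illustrative; see the linked docs for the real options):

```yaml
# serverless-compose.yml at the repository root
services:
  core:
    path: core        # long-lived resources (tables, buckets), rarely removed
  api:
    path: api         # disposable service; can be removed and redeployed freely
    params:
      tableArn: ${core.tableArn}   # consumes an output exported by 'core'
```

Deploying with Compose orders the services by their dependencies, while `sls remove` inside the api service leaves the core stack, and its data, untouched.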
While I agree CloudFormation should handle the resource existence related issues, I’d like there to be a functionality in serverless where I can easily pass a flag for skipping specific resources so it would not be included in the CF template to begin with.
@jthomerson I don’t think anyone has any issue with the “happy path”. I can deploy a thousand times and never hit an issue.
But then there are outliers like @jasonmccallister’s case, or for instance my own, where I added an unrelated line and got an error regarding DynamoDB. And to be honest, right now I’m again battling the same demon. I added an apiKey to my function; it just wasn’t mentioned clearly that you currently cannot reuse the key between stages. That seems to have triggered some other change in the CF template which in the end resulted in Dynamo resource errors. Right now I have issues with DynamoDB tables that until now deployed properly. I’m going to spend the next x amount of time trying to figure out something that really hasn’t changed a bit and should just deploy my code, yet it won’t.
Mind you, I really had a blast the hundreds of other times deploys worked just perfectly 😃 and I’m grateful for those times.
What do you suggest as the best way to resolve those issues that block serverless?
Hi this is a good point and maybe I’m doing something wrong in this case.
use case is:
we constantly deploy updates to the stack, which technically recreates it - e.g. add a new lambda => redeploy.
in our CI system, a commit is pushed and ‘sls deploy --stage test’ is called, then all integration tests are run, and so on.
thanks,
g.
On Mon, May 4, 2020 at 9:04 PM Jeremy Thomerson notifications@github.com wrote:
–
Lead Developer - Fiehnlab, UC Davis
gert wohlgemuth
work: http://fiehnlab.ucdavis.edu/staff/wohlgemuth
linkedin:
https://www.linkedin.com/in/berlinguyinca
Same with SQS queues - it should be possible to just print a warning and not have everything crash. Right now we have several serverless files in different directories for deploying the stack, creating resources, etc. This is just a bit counterintuitive. Could we just have a simple plugin maybe?
On Mon, May 4, 2020 at 7:21 PM Jeremy Thomerson notifications@github.com wrote:
I agree, when Serverless creates a stack, it should be able to gracefully handle cases where a resource already exists, and import it as appropriate. Ideally, to prevent breaking existing applications, I would recommend only importing resources for which a DeletionPolicy tag exists, follow the DeletionPolicy when the stack is removed, and only throw an error when the resource already exists and the DeletionPolicy tag is not defined. This would prevent unexpected importing and deleting of important resources, while also not requiring a flag for this functionality.
A Serverless deployment’s resource configuration should allow me to create the resources that don’t exist while utilizing the ones that already do.
@jasonmccallister Thank you for your input! I ran into exactly the same issue right now - using resources to create an S3 bucket, and it seems that there’s no option to skip creation if the bucket already exists.
Yea, an interesting thing about this problem in terms of Ansible and Serverless is that I am trying to do the following: create Dynamo tables in Ansible (which works fine, except that you can’t declare streams), and then in Serverless declare AppSync that uses the Dynamo tables for the mutations. I then have lambdas in Serverless that listen to the Dynamo streams. So I thought great, I can declare those tables in Serverless as Resources, because Serverless does allow you to initialize the stream that lambda will then listen to, but then it fails as mentioned above because the resource already existed. To recap: Ansible should really have the ability to define a stream, and Serverless should really act like Ansible, in that if a resource already exists it just updates that resource’s settings.
I ran into this issue today and wanted to document what I found.
TL;DR I think that extra spaces, or even commented out code, under resources makes Serverless (or CloudFormation?) think it needs to create a brand new resource.
Steps: I am using resources to create an S3 bucket. I want to ensure that resource exists and is the same for the project - regardless of environment. So the bucket name will always be some-asset-bucket.
When I add a new resource, for example a new SNS topic, Serverless will always try to create the S3 bucket again. So I removed the SNS topic and still had the issue.
I then performed a Git reset to before I made the SNS topic and it deployed without giving me the error.
If anyone is interested I fixed my particular use case using this plugin: https://github.com/SC5/serverless-plugin-additional-stacks
@ali-himindz The brute force approach I use removes the entire resources key from serverless.yml at deploy time (before restoring it). (Requires the python yq tool, jq and moreutils.)
I think --skip-custom-resources is not a solution because, as said, CF will delete resources not included in the update. The ultimate solution is to handle the database via a separate CloudFormation stack. In that case there is still the problem that you cannot update the same table with new indexes, but this is a known problem and it is no longer coupled to the deployment of lambda functions.
My only concern right now is that without defining my tables in serverless.yaml I am not able to use the serverless-dynamodb-local plugin. This is what I am looking for right now: running a local DynamoDB using a custom CF template.
P.S. I guess I need something like this https://github.com/steven-bruce-au/dynamodb-local-cloud-formation
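The brute-force trick above can also be sketched without yq: a small Python filter (assumption: top-level keys in serverless.yml start at column 0) that drops the whole `resources:` block before deploy:

```python
import re

def strip_resources(yml_text: str) -> str:
    """Remove the top-level 'resources:' block from serverless.yml text.

    Naive line-based filter: skips from the 'resources:' line until the
    next top-level key (a non-indented, non-empty line).
    """
    out, skipping = [], False
    for line in yml_text.splitlines():
        if re.match(r"^resources:\s*$", line):
            skipping = True
            continue
        if skipping and line and not line[0].isspace():
            skipping = False  # reached the next top-level key
        if not skipping:
            out.append(line)
    return "\n".join(out)
```

As with the yq/jq version, save the original file first and restore it after `sls deploy`; this is a workaround, not a supported workflow.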
It makes sense, to me, to consider and handle trigger resources as exogenous to the CF stack. That a CF stack is expected to manage a resource that triggers the instantiation of the stack itself strikes me as a poor design choice. A lambda function is not always the entirety of an application. It is (probably most often) a service to a larger application.