serverless: BREAKING - Create build command and drop --noDeploy option
This is a Feature Proposal
Description
Some use cases, especially with CI/CD, demand that the deployment packages be created beforehand. After #2659 is merged we can create a separate build command with its own lifecycle that can then be included by the deployment command.
`serverless build` without any options will package all artifacts for deployment to Lambda and put them in the `.serverless` folder.
`serverless build -f hello` will only package the specified function and put it in the `.serverless` folder.
Both build commands will overwrite the artifact configuration of each function to point to the newly created artifacts, so other commands like `deploy` or `deploy function` can pick them up and release the artifact. This only works if `build` runs as a dependency of those other commands.
Future extensions:
`serverless build --output` (or `-o`) will package all artifacts and put them into the specified output directory.
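Put together with the existing deploy command, the proposed flow in a CI pipeline might look roughly like this (a sketch of the proposal only; the `build` command does not exist yet):

```sh
# Package everything into .serverless/ (proposed command)
serverless build

# Or package only a single function
serverless build -f hello

# A later deploy picks up the artifacts that build pointed the
# function configuration at, instead of re-packaging from scratch
serverless deploy
```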
Similar or dependent issues:
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 12
- Comments: 24 (16 by maintainers)
Some great points there @arabold. I totally agree about package and environment needing to be different - good to call it out.
While I can understand different API GW stages sharing the same bucket (because they share the same API GW project), I currently think it’s a broken model - I’m hoping to be proved wrong, but so far stages have been nothing but trouble for me.
I would go so far as to suggest that we might need to avoid API GW stages! API GW is the only service that supports them, and the concept doesn’t translate well into the other services.
Different “environments” (as opposed to stages) must have different supporting infrastructure (e.g. buckets, etc). No large organisation/enterprise will consider deploying non-production and production environments in the same AWS account or with shared resources; I would be concerned if a business of any size were doing it. Using different accounts to limit “blast radius” is simply the recommended way to do things in AWS, for a variety of reasons.
Unfortunately, right now the idea of stages and environments are mixed, and I think it complicates this (and other) problem spaces.
@arabold I’d be a bit leery of needing to rely on APIG stages. A lot of people deploy production in a separate AWS account as there are limits to the type of authZ you can do with IAM.
I like the idea of a 3 stage approach at least for the artifact bucket (this can be managed across AWS accounts fine with IAM) but I would think it would be important to not rely on APIG stages (what do they allow for anyway other than just sugar in AWS console UI?).
+1 for the ‘build’ command! However not a big fan of the ‘push’ command, sounds a bit like only uploading your build.
@mthenw FYI I’m only renaming the milestones once it comes up, basically because before the milestone is planned I don’t necessarily know if a breaking change will be happening in it. So in this case it will be renamed and will then be at least 2.0
It is worthwhile noting that alias support as outlined above has been implemented by @HyperBrain as standalone plugin. This addresses most of my initial concerns and I’d like to encourage everybody on this thread to help beta testing: https://github.com/serverless/serverless/issues/2411#issuecomment-283772297
I believe there is still a need for a `--build` vs `--deploy` command line option though.

Don’t want to drift too far off topic (which is splitting deploy into multiple steps) and I agree that you need to be wary when prod and dev aren’t strictly separated. “dev” and “prod” might not be good examples given by me in the first place. Instead think of “staging” and “prod”, or blue-green deployment where you actually have to share some basic common resources. Or scenarios in which you want to do split testing between two stages. My reasoning behind having multiple stages with shared Lambdas and APIG is explained in more detail in the comment here: https://github.com/serverless/serverless/issues/2411#issuecomment-267663782.
My point is that Serverless should allow certain flexibility and not tie CloudFormation and stages strictly together. I think we all agree that a deployment package needs to be independent from the target stage it’s gonna be deployed to.
@johncmckim
Just to be clear, the CloudFormation template created will be parameterized, right? At the moment there is no way for me to parameterize the template for re-use in all environments. For example, the domain name for a base path mapping of an API can be different based on the environment, but at the moment I need to know the environment to deploy before I create the CF template. I am guessing that would be changed, and `sls deploy --artefact x` or `sls push` will take in `--parameter-overrides` like CF deploy does?
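For comparison, this is how the native AWS CLI handles the same problem when deploying an already-packaged template (`aws cloudformation deploy` and its `--parameter-overrides` flag are real; the stack name, file name, and parameter are made up for illustration):

```sh
# Deploy the same packaged template to any environment by
# overriding parameters at deploy time
aws cloudformation deploy \
  --template-file packaged.yml \
  --stack-name my-service-staging \
  --parameter-overrides DomainName=staging.example.com
```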
@nikgraf looking forward to hearing it.
@pmuens not sure why they upload it to S3. I’d prefer that didn’t happen. We should be able to store artefacts in a place that suits our build pipelines. That might be S3, or it might be Octopus Deploy, TeamCity, Jenkins, etc. I think uploading to S3 makes more sense as part of the deploy step rather than the package step.
@nikgraf thanks for the update!
I thought about that as well, and my assumption is that they upload it because they need to have an S3 bucket in place. So they create one in order to replace the `CodeUri` property in the template file and upload it right away (because they have the bucket anyway). Just an assumption, but I guess it has something to do with the bucket for deployment…

hey @johncmckim, last week I started to work on a proposal to tackle multiple issues at once and still have a consistent and sane experience. I presented it internally and some more ideas came in. I wanna share it asap, but it needs some refinement first. Definitely going to share it here!
The issues we try to solve:
(The `aws package` command packages, uploads and creates a new SAM file. Still need to investigate why they made this decision - input here is very welcome!)

@johncmckim, for me, option 2 would be much better. I’m currently working on a project where I deploy zip + CloudFormation files (using `--noDeploy`) with Jenkins. Those files are then used later when the whole stack is deployed to AWS with Ansible.
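The two-phase CI workflow described above can be sketched roughly like this (a hypothetical pipeline; the playbook name is illustrative, and `--noDeploy` is the pre-2.0 flag this issue proposes replacing with a dedicated `build` command):

```sh
# Build phase (Jenkins): create the zip + CloudFormation files
# without touching AWS
serverless deploy --noDeploy

# The artifacts now sit in .serverless/ and can be archived by
# the CI server for a later release step
ls .serverless/

# Release phase: push the previously built stack to the target
# account with Ansible (playbook name is hypothetical)
ansible-playbook deploy-stack.yml
```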