aws-sam-cli: Unable to execute multiple requests in parallel through sam local start-api

Template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  CheckoutLambda:
    Type: "AWS::Serverless::Function"
    Properties:
      Handler: "CheckoutLambda/index.handler"
      Role: redacted
      Runtime: "nodejs6.10"
      Timeout: 300
      Environment:
        Variables:
          ENV: int
      Events:
        CheckoutApi:
          Type: Api
          Properties:
            Path: '/checkout/donate'
            Method: post
```

Executing two POSTs in parallel:

```shell
curl -X POST http://127.0.0.1:3000/checkout/donate \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: 9f48e8af-a291-c6c6-6e9f-cc60a358a872' \
  -d '5000' &
curl -X POST http://127.0.0.1:3000/checkout/donate \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: 9f48e8af-a291-c6c6-6e9f-cc60a358a872' \
  -d '4000'
```

Output from the AWS SAM CLI:

```
2018/02/12 14:42:53 Invoking CheckoutLambda/index.handler (nodejs6.10)
2018/02/12 14:42:53 Invoking CheckoutLambda/index.handler (nodejs6.10)
START RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 Version: $LATEST
2018-02-12T22:42:57.276Z 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 4000
START RequestId: 6d07c29d-da2e-14cc-2d1e-8996b0fccdcf Version: $LATEST
2018-02-12T22:42:57.539Z 6d07c29d-da2e-14cc-2d1e-8996b0fccdcf 5000
END RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59
REPORT RequestId: 3c1c1ce5-fb4a-13be-4400-2fc7df6fed59 Duration: 4033.19 ms Billed Duration: 4100 ms Memory Size: 0 MB Max Memory Used: 31 MB
2018/02/12 14:42:59 Function returned an invalid response (must include one of: body, headers or statusCode in the response object): unexpected end of JSON input
```

Another thing to note is that the function then later times out:

```
2018/02/12 14:26:05 Function CheckoutLambda/index.handler timed out after 300 seconds
```

Handler (`index.js`):

```javascript
// Busy-wait for sleepDuration milliseconds.
function sleepFor(sleepDuration) {
  var now = new Date().getTime();
  while (new Date().getTime() < now + sleepDuration) {
    /* do nothing */
  }
}

function formatResponse(statusCode, body) {
  return { statusCode: statusCode, body: JSON.stringify(body) };
}

exports.handler = (event, context, callback) => {
  console.log(event.body);
  sleepFor(JSON.parse(event.body));
  callback(null, formatResponse(200, 'test'));
};
```

This is a contrived example that I could share, but I'm running into this problem when trying to create integration tests for my Lambda function. Ideally I'd like to be able to execute many tests in parallel against the locally spun-up API so that it mirrors the actual API. Thanks to anyone who looks into this! 😃
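A minimal sketch of such a parallel test driver, using the endpoint and payloads from the curl calls above (`CURL_CMD` is a hypothetical override, not part of the original setup, so the function can be dry-run without a running server):

```shell
# Fire one POST per payload concurrently against the local API,
# then wait for all of them to finish.
# CURL_CMD is a hypothetical override; it defaults to the real curl.
post_parallel() {
  url="$1"; shift
  for body in "$@"; do
    ${CURL_CMD:-curl} -s -X POST "$url" \
      -H 'content-type: application/json' \
      -d "$body" &
  done
  wait # block until every background request has finished
}

# Example: post_parallel http://127.0.0.1:3000/checkout/donate 5000 4000
```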

PS: Sometimes it works as expected. Race condition?

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 16 (1 by maintainers)

Most upvoted comments

Parallel requests are now fully supported as of v0.3.0. We even have integ tests that exercise this 😃

Closing as this was addressed.

To avoid rebuilding the container for each request, you can use the `--warm-containers LAZY` flag:

sam local start-api --warm-containers LAZY

I'm not trying to make concurrent connections, since I'm not sure how the logs would be handled. I like how Lambda provides a consistent log per request, with logs from several requests never shuffled together. But when this flag is added, the container is built once and the API becomes pretty snappy.


The issue with this is that if you send multiple requests at once while another request is running, the call fails. To address that, I have this funny setup now: a shell script that just runs 10 instances, and the UI picks the next port for each call when the URL from config comes with `{{RANDOM_LAMBDA_PORT}}` as a port placeholder. For my app, 10 is more than enough, so I basically have 10 concurrent Lambdas:

sam local start-api -p 11000 --warm-containers EAGER &
sam local start-api -p 11001 --warm-containers EAGER &
sam local start-api -p 11002 --warm-containers EAGER &
sam local start-api -p 11003 --warm-containers EAGER &
sam local start-api -p 11004 --warm-containers EAGER &
sam local start-api -p 11005 --warm-containers EAGER &
sam local start-api -p 11006 --warm-containers EAGER &
sam local start-api -p 11007 --warm-containers EAGER &
sam local start-api -p 11008 --warm-containers EAGER &
sam local start-api -p 11009 --warm-containers EAGER &
sam local start-api -p 11010 --warm-containers EAGER

I know it's a hacky workaround, but it works fine; once all 10 ports have been hit at least once, they all start to respond in milliseconds rather than seconds.
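The ten hand-written lines above can also be generated by a small loop. A sketch, where `SAM_CMD` is a hypothetical override (not part of the original script) so it can be dry-run without the SAM CLI installed:

```shell
# Start COUNT warm sam instances on sequential ports beginning at BASE_PORT,
# equivalent to the hand-written list above.
# SAM_CMD is a hypothetical override; it defaults to the real sam CLI.
start_warm_fleet() {
  base_port="$1"; count="$2"
  i=0
  while [ "$i" -lt "$count" ]; do
    ${SAM_CMD:-sam} local start-api -p "$((base_port + i))" --warm-containers EAGER &
    i=$((i + 1))
  done
  wait # keep the script alive until every instance exits
}

# Example: start_warm_fleet 11000 11
```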

Yeah, it's really difficult to develop and test an API locally when it can only handle one request at a time. That, coupled with the fact that each request takes 4-8 seconds to complete (a container has to spin up for every request), makes the whole process painful.

The code is storing per-request information in a context that spans multiple requests, so there is weird/broken behavior if you issue multiple requests at the same time. I’ve put together a cursory PR (sorry, haven’t written much go) at https://github.com/awslabs/aws-sam-local/pull/304

Would be really nice to have this functionality, or a queue system to handle multiple requests.

@akomiqaia Here is the command I use:

```shell
sam local start-api --docker-network=my-docker-network --host 0.0.0.0 --port 3001 --debug --template-file ./my.template.json --docker-volume-basedir ./build
```

Eventually I worked around my problem by using a custom Docker network instead of the host network.
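That workaround amounts to creating the network once and pointing `start-api` at it. A sketch, reusing the network name from the command above (`DOCKER_CMD`/`SAM_CMD` are hypothetical overrides, not real flags, so the commands can be dry-run):

```shell
# Create a user-defined Docker network and launch the local API on it.
# DOCKER_CMD / SAM_CMD are hypothetical overrides; they default to the
# real docker and sam CLIs.
start_on_custom_network() {
  net="$1"
  ${DOCKER_CMD:-docker} network create "$net"
  ${SAM_CMD:-sam} local start-api --docker-network="$net" --host 0.0.0.0 --port 3001
}

# Example: start_on_custom_network my-docker-network
```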