faas: Technical support request about timeouts

My actions before raising this issue

Expected Behaviour

I have a very simple function that accepts a file upload, puts the file into S3, and creates a document for it in a database. All timeouts are set to 21600 in the yml file, so even large files of around 2 GB or more should not be a problem.

Current Behaviour

When uploading a 2 GB file the upload succeeds after 7 minutes, but the data then seems to be passed on internally to the container running the function, which appears to take more than 60 seconds and in turn results in the following error:

openfaas-prod/gateway-6b5565cb76-qnsb5[gateway]: 2019/10/10 14:22:41 error with upstream request to: , Post http://upload.openfaas-prod-fn.svc.cluster.local:8080?property=5a9ecaec01d1644a84b8653d: context deadline exceeded
openfaas-prod/gateway-6b5565cb76-qnsb5[gateway]: 2019/10/10 14:22:41 Forwarded [POST] to /function/upload?property=5a9ecaec01d1644a84b8653d - [502] - 60.010386 seconds

Within the container I then see that the request was aborted, which is why I drew the conclusion above.

openfaas-prod-fn/upload-79887b545b-52wl7[upload]: 2019/10/10 14:22:41 Upstream HTTP request error: Post http://127.0.0.1:3000/?property=5a9ecaec01d1644a84b8653d: unexpected EOF
openfaas-prod-fn/upload-79887b545b-52wl7[upload]: 2019/10/10 14:22:41 stderr: Error: Request aborted

Possible Solution

I have already set all the environment variables I know of to a higher timeout, but it does not seem to help. Maybe there is another one that fixes this?

      read_timeout: 21600s
      write_timeout: 21600s
      upstream_timeout: 21600s
      exec_timeout: 21600s
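
For context, in an OpenFaaS stack file these settings typically sit under the function's environment section. A minimal sketch of how that might look (the gateway URL, lang, and image name below are assumptions for illustration, not taken from this issue):

      provider:
        name: openfaas
        gateway: https://backend        # assumed gateway URL
      functions:
        upload:
          lang: node                    # assumed template
          handler: ./upload
          image: example/upload:latest  # hypothetical image name
          environment:
            read_timeout: 21600s
            write_timeout: 21600s
            upstream_timeout: 21600s
            exec_timeout: 21600s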

Steps to Reproduce (for bugs)

Upload the file using the following command:

      curl -X POST -F 'files=@/largefile.avi' https://backend/function/upload\?property\=5a9ecaec01d1644a84b8653d

Your Environment

  • FaaS-CLI version (full output from faas-cli version): CLI: commit 25cada08609e00bed526790a6bdd19e49ca9aa63, version 0.8.14

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)? Kubernetes on DigitalOcean

Next steps

You may join Slack for community support.

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 20 (14 by maintainers)

Most upvoted comments

@lobermann it occurs to me while debugging this that you are trying to set the timeouts on your function, but the configuration that controls this error is actually on the gateway. These lines in your sample will not change the timeout used by the gateway: https://github.com/lobermann/openfaas-upload-issue/blob/09570c20e5c5839f4f85e7ba85d98dd5d0475e17/function.yml#L12-L17

You need to set this value in the environment of the gateway's pod.

Since you are deploying to Kubernetes, if you are using the Helm chart you can see the configuration values here: https://github.com/openfaas/faas-netes/tree/master/chart/openfaas#configuration. You want to skip to the section starting with gateway.readTimeout.
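
As a minimal sketch, a values override raising the gateway timeouts might look like this (the exact keys are documented in the chart README linked above; keeping upstreamTimeout slightly below the read/write timeouts mirrors the chart's defaults):

      # custom-values.yaml -- a sketch; adjust the values to your workload
      gateway:
        readTimeout: "21600s"
        writeTimeout: "21600s"
        upstreamTimeout: "21595s"  # kept slightly below read/write, as in the chart defaults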

After testing this locally, I was able to control the timeout and have very slow functions complete successfully, as long as they finished before the timeout value I had set.

If you are using Helm, can you try deploying with --set gateway.readTimeout=21600s? I think you will see the function start succeeding on the files you were testing before.
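
For example, a full upgrade command might look like the following sketch (the release name openfaas, the namespace openfaas, and the repo alias openfaas/ are assumptions; adjust them to your installation):

      helm upgrade openfaas openfaas/openfaas \
        --namespace openfaas \
        --reuse-values \
        --set gateway.readTimeout=21600s \
        --set gateway.writeTimeout=21600s \
        --set gateway.upstreamTimeout=21595s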

But as Alex was pointing out, you will generally have more success uploading directly to S3 and then sending the object key to your parsing function, rather than uploading very large files through a function; see the sketch below.
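
To illustrate that pattern, a rough sketch (the bucket name, object key, and the JSON payload the function expects are all hypothetical):

      # upload the large file straight to S3, bypassing the gateway entirely
      aws s3 cp /largefile.avi s3://my-upload-bucket/uploads/largefile.avi

      # then invoke the function with a small JSON body that carries the object key
      curl -X POST -H "Content-Type: application/json" \
        -d '{"bucket": "my-upload-bucket", "key": "uploads/largefile.avi"}' \
        'https://backend/function/upload?property=5a9ecaec01d1644a84b8653d'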