tusd: S3-Store: Slow transfer-speed in some environments

Question

For a customer running NetApp StorageGRID as the S3 solution, we see slow transfer speeds of around 2.5 Mb/sec. The speed is the same for both chunked and non-chunked uploads.

This is within the local network, so we would expect much faster transfers. Other applications running in the same network and using the same S3 bucket achieve the expected speeds.

In other environments we see great speeds with tusd, so I am not sure what the problem might be.

Do you have any tips for how we might go about debugging the slow transfer speeds? Or any hunches about what the cause might be?

In #344 it was mentioned that there were some ideas for increasing the efficiency of the transfers, but without going into details.

If you are able to share some of those ideas, I might be able to put in some time to implement them and contribute to this project.

Setup details

Please provide the following details, if applicable to your situation:

  • Operating System: container-less Linux distribution, running in a Kubernetes environment.
  • Used tusd version: 1.1.0
  • Used tusd data storage: S3-compatible (specifically NetApp StorageGRID)
  • Used tusd configuration: custom build, but here is the relevant S3 config:
s3 := s3store.S3Store{
	Bucket:            bucket,
	Service:           service,
	ObjectPrefix:      objectPrefix,
	MaxPartSize:       5 * 1024 * 1024 * 1024,        // 5 GiB, S3's maximum part size
	MinPartSize:       5 * 1024 * 1024,               // 5 MiB, S3's minimum part size
	MaxMultipartParts: 10000,                         // S3's limit on parts per multipart upload
	MaxObjectSize:     5 * 1024 * 1024 * 1024 * 1024, // 5 TiB, S3's maximum object size
	BaseConfig:        base,
}

About this issue

  • State: open
  • Created 4 years ago
  • Comments: 16 (8 by maintainers)

Most upvoted comments

Apologies for the late reply, @acj! Thank you very much for the insights and help!

More recently, we’ve implemented a version of WriteChunk that uses a producer-consumer pattern (decoupling client reads from S3 part uploads, with a configurable buffer) like you mentioned.

That’s amazing!
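
As a rough illustration of the idea, here is a minimal, hypothetical sketch of a producer-consumer WriteChunk; it is not the implementation mentioned in the comment above. uploadPart is a stand-in for the S3 UploadPart call, and partSize and the channel-based buffering are assumptions:

// Hypothetical sketch only: uploadPart stands in for the S3 UploadPart
// call, and the error handling is simplified.
package main

import (
	"fmt"
	"io"
	"strings"
	"sync"
)

const partSize = 5 * 1024 * 1024 // matches the 5 MiB S3 minimum part size

type part struct {
	number int
	data   []byte
}

// writeChunk reads parts from the client (producer) and hands them to a
// goroutine that uploads them to S3 (consumer) through a buffered
// channel, so a slow upload no longer stalls the client read.
func writeChunk(src io.Reader, bufferedParts int) (int64, error) {
	parts := make(chan part, bufferedParts)
	errs := make(chan error, 1)

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		failed := false
		for p := range parts {
			if failed {
				continue // keep draining so the producer never blocks
			}
			if err := uploadPart(p); err != nil {
				errs <- err
				failed = true
			}
		}
	}()

	var total int64
	for number := 1; ; number++ {
		buf := make([]byte, partSize)
		n, err := io.ReadFull(src, buf)
		if n > 0 {
			total += int64(n)
			parts <- part{number: number, data: buf[:n]}
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break // source exhausted
		}
		if err != nil {
			close(parts)
			wg.Wait()
			return total, err
		}
	}
	close(parts)
	wg.Wait()

	select {
	case err := <-errs:
		return total, err
	default:
		return total, nil
	}
}

// uploadPart is a placeholder for s3.UploadPartWithContext.
func uploadPart(p part) error {
	fmt.Printf("uploading part %d (%d bytes)\n", p.number, len(p.data))
	return nil
}

func main() {
	// Read a 12 MiB source with room to buffer two parts ahead.
	n, err := writeChunk(strings.NewReader(strings.Repeat("x", 12<<20)), 2)
	fmt.Println(n, err)
}

The buffered channel is what decouples the two sides: the client read can run ahead by bufferedParts parts while earlier parts are still uploading.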

It hasn’t been production-tested yet due to other priorities

We have the master.tus.io instance, which can also be used for a semi-production test.

I would be willing to open a draft PR in the meantime so that we can test and iterate on it together. What do you think, @Acconut?

That’s a good plan! I am very happy to assist you with it!

I was not referring to the chunk size settings on the client but instead talking about tusd’s internal chunk handling before uploading to S3

Oh, sorry. I misunderstood.

I could run some tests with:

  1. Increasing s3Store.MinPartSize to, say, 5 GB for now, ignoring resumability.
  2. Checking whether the current chunk is smaller than s3Store.MinPartSize and, if it is, uploading it with PutObjectWithContext (see the sketch below).

Does this sound reasonable?
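
To make step 2 concrete, here is a hedged sketch using the AWS SDK for Go v1, which tusd 1.x builds on; minPartSize, putIfSmall, and the bucket/key names are illustrative placeholders, not tusd's actual internals:

package main

import (
	"bytes"
	"context"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

const minPartSize = 5 * 1024 * 1024 // mirrors s3Store.MinPartSize

// putIfSmall uploads a chunk in a single round-trip when it is below the
// multipart threshold. It returns false when the caller should use the
// normal multipart path instead.
func putIfSmall(ctx context.Context, client *s3.S3, bucket, key string, chunk []byte) (bool, error) {
	if int64(len(chunk)) >= minPartSize {
		return false, nil
	}
	// One PutObject call instead of CreateMultipartUpload +
	// UploadPart + CompleteMultipartUpload.
	_, err := client.PutObjectWithContext(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(chunk),
	})
	return true, err
}

func main() {
	client := s3.New(session.Must(session.NewSession()))
	done, err := putIfSmall(context.Background(), client, "my-bucket", "my-key", []byte("small chunk"))
	_ = done
	_ = err
}

The appeal of this path is that a chunk below the threshold costs one request instead of the three (create, upload part, complete) that a multipart upload needs.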