tusd: When using deferred length with S3 compatible service, last part of multi-upload is ignored

Describe the bug: I use tusd with an S3-compatible service (Scaleway). When I upload a file in chunks with Upload-Defer-Length set to 1, the last part is not concatenated with the previous parts.

See the following screenshot, obtained with a chunk size of 5 MB.

(screenshot: s3-multi-upload)

Here is the content of .info file:

```json
{
  "SizeIsDeferred": false,
  "PartialUploads": null,
  "Storage": {
    "Key": "d484497fc33fbd87bf6d698a01a14e69",
    "Bucket": "essentials-test",
    "Type": "s3store"
  },
  "Offset": 10485760,
  "IsFinal": false,
  "ID": "d484497fc33fbd87bf6d698a01a14e69+NmQxZTY2NDUtYWQxYi00ZWMyLThlMmMtN2M4OThiZDU2Zjli",
  "IsPartial": false,
  "Size": 10691054,
  "MetaData": {}
}
```

Setup details

  • Operating System: Linux
  • Used tusd version: 1.3.0 (Docker)
  • Used tusd data storage: AWS S3
  • Used tusd configuration: -s3-endpoint https://s3.fr-par.scw.cloud -s3-bucket essentials-test --hooks-dir /srv/tusd-hooks
  • Used tus client library: tus-js-client

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 16 (11 by maintainers)

Most upvoted comments

I have identified the root cause, but unfortunately the patch is a bit more complex and I am lacking time right now. I hope to be able to work on this in maybe two weeks.

PR opened.

❤️

I can’t reproduce that behavior after fixing the first issue.

Oh, I see. I thought that S3Store.WriteChunk would not be called for an empty body, but that’s not the case. So, you are completely right and your fix patched everything 🎉 Thank you very much!

@Acconut Yep, I remember from when we worked on it together a few years ago 😄 (#219)

We’ve been depending on deferred-length uploads to S3 in production, so the feature was working in a previous version of tusd. I’m not sure when it got broken.

I investigated this a bit today, and I think the problem is that the s3Upload struct’s FileInfo field gets out of sync (specifically Size and SizeIsDeferred) when we call DeclareLength here:

https://github.com/tus/tusd/blob/d152c5bbf8122f484ed6efb91fd0d0a9a2f49efd/pkg/handler/unrouted_handler.go#L560-L567

We update info with the correct values there, but they aren’t propagated to the upload struct itself. That confirms the behavior that @ecoessentials mentioned earlier (which was a very helpful tip).

I noticed that DeclareLength’s receiver is a value type. If I modify it to be a pointer (to match the semantics of WriteChunk) like so:

```diff
-func (upload s3Upload) DeclareLength(ctx context.Context, length int64) error {
+func (upload *s3Upload) DeclareLength(ctx context.Context, length int64) error {
```

… then everything works as expected. The upload completes, and the S3 object has the correct bytes.

So, it seems like a subtle interaction between UnroutedHandler and S3Store. If my fix above seems reasonable, I'll open a PR for it. Do you have any suggestions for how to capture this in a test?

@Acconut I’m running into this as well after upgrading one of our services from pre-1.x. Can you share what the root cause is?

If it would be helpful, I’m willing to work with you on preparing a fix, testing, etc.