node-archiver: Fails to stream to S3 using AWS SDK v3

I need to stream a number of files from S3, zip them and stream them back to S3 as a zip file.

I had a previous working version using aws-sdk V2, but I’m attempting to now replace that implementation with V3.

Here’s a simplified example of some code to reproduce the issue I’m seeing. This correctly streams the input file down from S3, but then never seems to pipe any output back into the upload stream.

The console output will usually output the zip entry (inconsistently), but I never see any console logs related to upload, and no object gets created in S3. The command completes with exit code 0, but never outputs ‘done’ from the log statement at the end of the file.

import { PassThrough } from 'stream';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';
import archiver from 'archiver';

const { AWS_REGION, S3_BUCKET } = process.env;
const s3FileKey = 'example.jpg';
const outputZipKey = `test.zip`;

const s3Client = new S3Client({ region: AWS_REGION });

(async () => {
  const s3UploadStream = new PassThrough();
  s3UploadStream.on('error', (err) => {
    console.error('error', err);
    throw err;
  });

  const s3Upload = new Upload({
    client: s3Client,
    params: {
      Body: s3UploadStream,
      Bucket: S3_BUCKET,
      ContentType: 'application/zip',
      Key: outputZipKey
    }
  });

  s3Upload.on('httpUploadProgress', (progress) => {
    console.log(JSON.stringify(progress));
  });

  const archive = archiver('zip', { zlib: { level: 0 } });
  archive.pipe(s3UploadStream);

  archive.on('entry', (f) => {
    console.log(f);
  });

  const downloadStream = (await s3Client.send(new GetObjectCommand({ Bucket: S3_BUCKET, Key: s3FileKey }))).Body;

  archive.append(downloadStream, { name: 'file.jpg' });

  await archive.finalize();
  await s3Upload.done();
  console.log('done');
})();

I suspect there is some incompatibility that’s been introduced with V3, and noticed there are some discussions relating to the version of readable-stream in archiver here: https://stackoverflow.com/questions/69751320/upload-stream-to-amazon-s3-using-nodejs-sdk-v3

Can anyone reproduce or provide a workaround?

  • Archiver 5.3.1
  • Node 18.16.0
  • @aws-sdk/client-s3 3.327.0
  • @aws-sdk/lib-storage 3.327.0

About this issue

  • State: open
  • Created a year ago
  • Reactions: 5
  • Comments: 15

Most upvoted comments

Also experiencing this issue.

I dug into it a bit. The problem seems to be that the underlying _finalize call is never made, so the promise returned by finalize() never resolves; with nothing left to do on the event loop, Node simply exits with code 0.

In Archiver.prototype.finalize, this._queue.idle() is false so _finalize is not called directly from there.

Changing the order of the promises as @michael-raymond mentioned above seems to make it work: in that case, _finalize gets called when onQueueDrain fires. Without the reordering, that event never fires.

Anyway that’s as far as I dug, hopefully this is helpful.

I ran into this today, and managed to get it working better, though perhaps not entirely. I don’t fully understand what’s happening, and came to the issues to look for an explanation.

If you start the S3 Upload promise first, but don’t await it, then append to the zip, then await the .finalize() call, then await the Upload promise, it will go faster.

...

(async () => {

  ...

  const uploadPromise = s3Upload.done()

  const archive = archiver('zip', { zlib: { level: 0 } });
  archive.pipe(s3UploadStream);

  ...

  archive.append(downloadStream, { name: 'file.jpg' });

  await archive.finalize();
  await uploadPromise;
  console.log('done');
})();

I also tried await Promise.all([archive.finalize(), s3Upload.done()]), which also worked for my case, but was slower. I was zipping ~90 small files, and it took 18 seconds if I started both promises at the same time, but only 11 if I started the upload promise before appending. I didn’t get any progress callbacks until after the finalisation had completed though. So I’m still confused as to what’s actually happening.
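
For reference, a self-contained sketch that applies this ordering to the code from the original post (same env-provided bucket, same example.jpg source key; progress logging and error handling omitted), in case it helps anyone piece the workaround together:

import { PassThrough } from 'stream';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';
import archiver from 'archiver';

const { AWS_REGION, S3_BUCKET } = process.env;
const s3Client = new S3Client({ region: AWS_REGION });

(async () => {
  const s3UploadStream = new PassThrough();

  const s3Upload = new Upload({
    client: s3Client,
    params: {
      Body: s3UploadStream,
      Bucket: S3_BUCKET,
      ContentType: 'application/zip',
      Key: 'test.zip'
    }
  });

  // Kick off the multipart upload before appending anything, but don't await it yet.
  const uploadPromise = s3Upload.done();

  const archive = archiver('zip', { zlib: { level: 0 } });
  archive.pipe(s3UploadStream);

  const downloadStream = (await s3Client.send(
    new GetObjectCommand({ Bucket: S3_BUCKET, Key: 'example.jpg' })
  )).Body;
  archive.append(downloadStream, { name: 'file.jpg' });

  // Finalize the archive first, then wait for the upload that was started earlier.
  await archive.finalize();
  await uploadPromise;
  console.log('done');
})();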

Try increasing the highWaterMark of your PassThrough stream.
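
A minimal sketch of what that change looks like, assuming the PassThrough from the original post; the 1 MiB value is an arbitrary example, not a recommendation:

import { PassThrough } from 'stream';

// Allow roughly 1 MiB to buffer before backpressure kicks in (the default is 16 KiB).
const s3UploadStream = new PassThrough({ highWaterMark: 1024 * 1024 });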

I have the same problem. To me it looks like it is not streaming at all. Increasing the highWaterMark on the PassThrough stream seemingly works only because it buffers more data in memory. Once the input files are larger than the highWaterMark / available memory, it fails again. I have an event handler for httpUploadProgress (I use the S3 SDK uploader: import { Upload } from '@aws-sdk/lib-storage') and all of those logs come at the end, indicating there is no streaming until archiver.finalize().

This gist helped me a ton after running into swallowed errors and my lambdas ending without any notice: https://gist.github.com/amiantos/16bacc9ed742c91151fcf1a41012445e?permalink_comment_id=3804034#gistcomment-3804034. Might help you too!
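
On the swallowed-errors point, a general hedged sketch (not from that gist) of attaching archiver's 'warning' and 'error' handlers so failures surface instead of disappearing, assuming the archive instance from the original post:

// Surface archiver problems instead of letting them be silently swallowed.
archive.on('warning', (err) => {
  console.warn('archiver warning', err);
});
archive.on('error', (err) => {
  console.error('archiver error', err);
});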

Hello! Is there any update on this issue? I am experiencing the same problem. Adding an httpsAgent did not solve the issue.

I am testing by archiving an S3 prefix which contains 200 objects.

I am having the same problem as well. For me it seems especially files with larger file size tend to cause this issue. For some small files it works fine. I hope this can be solved.

Similar problem here today. I solved it by adding an httpsAgent to the s3Client config:

const { S3Client } = require("@aws-sdk/client-s3");
const { NodeHttpHandler } = require("@aws-sdk/node-http-handler");
const https = require('https');

const s3Client = new S3Client({
    region: "ap-southeast-2",
    requestHandler: new NodeHttpHandler({
        httpsAgent: new https.Agent({
            keepAlive: true,
            rejectUnauthorized: true
        })
    })
});

I am facing a very similar issue, where I am zipping files and pushing them to an SFTP server. For multiple small files it works, but for large files, for example 3 files each above 1.12 GB, I get “Archive Error Error: Aborted”. Has anyone tried such large files, or can this library not handle files this big?
