StreamSaver.js: Save Failure - Network Error (due to Service Worker Restart)

Great work on StreamSaver.js! I’m trying to use it to download and save a large file, but I consistently run into a problem in Chrome where the download fails about 10 minutes in with “Failed - Network Error”. I’ve done a good amount of debugging and can see that Chrome, which manages the service worker lifecycle, seems to kill and restart the service worker every 10 minutes or so. Have you run into this issue?

I was able to reproduce it using your example.html file by simply slowing down the lorem ipsum text generator as follows (I added a 1000 ms timeout):

writer.write(text).then(() => { setTimeout(pump, 1000) })

You’ll notice that when you generate 30GB, the save fails about 10 minutes in. Make sure you do not have Developer Tools open, as Chrome doesn’t kill/restart the service worker while Developer Tools are open (for troubleshooting purposes), so the problem won’t reproduce.
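
For context, the modified write loop looks roughly like this (a sketch based on example.html; the exact generator code in the example differs slightly):

const fileStream = streamSaver.createWriteStream('lorem.txt')
const writer = fileStream.getWriter()
const encoder = new TextEncoder()

function pump () {
  const text = encoder.encode('Lorem ipsum dolor sit amet… ')
  // The added 1000 ms delay keeps the download running long enough
  // for Chrome's ~10 minute service worker restart to hit it.
  writer.write(text).then(() => { setTimeout(pump, 1000) })
}

pump()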

Any thoughts on how to get around this problem?

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 32 (19 by maintainers)

Most upvoted comments

Transferable Streams - Woop woop!

Transferable streams were introduced in Chrome v73. I have tested it out in Canary and it will ship with the next Chrome version (v72 is current right now).

This now means that a TransformStream is created, the writable stream is given to you, and the readable stream is transferred to the service worker. This solves a lot. The service worker responds directly with the readable stream and has 1:1 communication instead of using postMessage.
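
In rough terms the handoff looks like this (a sketch of the general mechanism, assuming the page is already controlled by the service worker; StreamSaver's actual message format differs):

const { readable, writable } = new TransformStream()

// The readable side is moved to the service worker via the transfer list…
navigator.serviceWorker.controller.postMessage({ readable }, [readable])

// …and the page keeps the writable side and writes to it as usual.
const writer = writable.getWriter()
writer.write(new TextEncoder().encode('hello world'))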

That also means you can now pass a queuing strategy into the mix and actually use it, like so:

var strategy = new ByteLengthQueuingStrategy({ highWaterMark: 32 * 1024 })
var fileStream = streamSaver.createWriteStream('filename.txt', strategy)

Then, when the buffer is full, writer.write will return a promise that only resolves once the readable stream can take on more data:

await writer.write(something) // resolved when bucket is not full
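
So a producer loop only needs to await each write to respect backpressure (generateChunks() below is just a placeholder for whatever produces your data):

const writer = fileStream.getWriter()

for (const chunk of generateChunks()) { // placeholder chunk source
  await writer.write(chunk) // resolves once the 32 KiB bucket has room again
}

await writer.close()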

It also means that if you pipe a really large file like this:

const res = await fetch('huge_file')
res.body.pipeTo(fileStream)

… then you can pause the download and it will actually tell the browser to back off, because there is too much data in the bucket, so you only download as fast as you can write to disk.

What else? Oh yeah, it might mean that the service worker doesn’t have to stay alive to relay every chunk of data with postMessage to the output stream that goes to the disk (it just hands the readable stream over and lets the browser internals do their thing). That might also mean that a service worker restart won’t affect the download!!! But I have not tested that yet.

Perhaps we can close this issue now (or soon)

@Epsilon hmm, I just tested sending a simple postMessage heartbeat to the sw, and it seems to work pretty well.

Then I did the same test, sending a postMessage heartbeat every 5 seconds over a MessageChannel that I had passed to the sw, but that restarted the service worker anyway. This restart could possibly have been avoided if I had used regular postMessage instead of a MessageChannel, but I wanted to solve the problem for those who don’t have HTTPS on their site.
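
The two heartbeat variants looked roughly like this (interval and message contents are just illustrative):

// Variant 1: plain postMessage straight to the controlling sw (kept it alive in my test)
setInterval(() => {
  navigator.serviceWorker.controller.postMessage('ping')
}, 5000)

// Variant 2: heartbeat over a MessageChannel port transferred to the sw
// (the sw got restarted anyway in my test)
const channel = new MessageChannel()
navigator.serviceWorker.controller.postMessage('heartbeat-port', [channel.port2])
setInterval(() => {
  channel.port1.postMessage('ping')
}, 5000)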

I could have the default live mitm.html ping the sw.js every 4.5. That would solve it for everyone who uses HTTPS on their site and hasn’t changed the mitm URL to something else; this group of users wouldn’t have to lift a finger.

The others who use HTTP must rely on a popup which is immediately closed after transferring the MessageChannel to the sw.js. This group of users must send a dummy AJAX request to the sw to avoid keeping the popup open all the time, and they need to update streamsaver.js (that also goes for those who use a custom mitm).
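
The dummy request could be as simple as a periodic no-op fetch that reaches the sw's fetch handler (the URL and interval here are made up):

setInterval(() => {
  // Any request that hits the sw's fetch handler counts as an event
  // and extends its lifetime; the URL below is just a placeholder.
  fetch('/streamsaver/keep-alive').catch(() => {})
}, 5000)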

Edit: I tested this and it did seem to work fine on localhost, but when I tested it on Plunker the service worker got destroyed after 5 minutes anyway. It seems like the only way around this is to host the service worker yourself.

Updated to use a simpler for loop (in case the list contains the same item multiple times).

PS: you can’t use the stream polyfill here. The native ReadableStream from fetch doesn’t work together with the polyfill’s WritableStream.

@kgrzesiu

There is an easier way now to save multiple streams using pipeTo:

const links = [
  URL.createObjectURL(new Blob(['fragment1'])),
  URL.createObjectURL(new Blob(['fragment2']))
]

const fileStream = streamSaver.createWriteStream('multifiledownload.txt')

for (let i = 0, len = links.length; i < len; i++) {
  const res = await fetch(links[i], this.options) // this.options: your own fetch options
  const isLast = i + 1 === len
  // Keep the file stream open for every fragment except the last one
  await res.body.pipeTo(fileStream, { preventClose: !isLast })
}

What I have heard is that the service worker won’t restart if it is hosted on your own domain rather than using the one hosted on GitHub Pages.

I’m quite busy right now with work and other stuff, but I can maybe take a look at it over the weekend. I will create a new issue for partial saving: #46

@bhnyc’s solution didn’t work.

But I just thought about another solution/idea that involves partial downloads. That way the service worker can still restart and still manage to save the file. I think it may even work in Firefox.

I don’t know if it works with an unknown file size. It would need a complete refactor of the code (switching from streams to a 206 partial response in sw.js).
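
For what it’s worth, the sw.js side of that idea might look roughly like this (a completely untested sketch; the URL and getBufferedChunk() are placeholders for whatever buffers the data):

self.addEventListener('fetch', event => {
  if (!event.request.url.endsWith('/partial-download')) return // placeholder URL

  // Answer with one buffered chunk as a 206 Partial Content response,
  // driven by the Range header the browser's download manager sends.
  const chunk = getBufferedChunk(event.request.headers.get('Range'))
  event.respondWith(new Response(chunk.data, {
    status: 206,
    headers: {
      'Content-Type': 'application/octet-stream',
      'Content-Range': `bytes ${chunk.start}-${chunk.end}/${chunk.total}`,
      'Content-Disposition': 'attachment; filename="download.bin"'
    }
  }))
})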