setup-node: Action timing out when new node versions are released

I am using this in a job:

- name: Use Node.js ${{ matrix.node-version }}
  uses: actions/setup-node@v1
  with:
    node-version: ${{ matrix.node-version }}

with node-version set to 13.x, but my workflow takes 10+ minutes just to complete the setup-node step. Any idea why, or is anyone else having this problem?

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 151
  • Comments: 41 (13 by maintainers)

Most upvoted comments

Update: We are starting work to insulate the action from issues like this. We want to both cache the latest of every LTS directly on the hosted image (so it doesn't contribute to your run time) and, as a second solution, cache all versions on a CDN for actions to use. We may still read through to the origin, but that work will allow for much better reliability and performance in most cases.

This is a serious issue if one's pipeline doesn't have a timeout set up. It will cause a lot of people's CI to hang for up to 6 hours. That isn't just wasting MS's resources, it is wasting resources for other paying customers. Please fix this!


No code has been changed that would explain the timeline, so it seems like some critical dependency is having problems serving resources properly. But the issue can be fixed by re-running the build, which suggests there may be some bad hosts in the fleet, or throttling?

Depending on what you need, you might skip this action completely:

All runners have Node.js 12.16.1 preinstalled (e.g. https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md; search for "Node.js").
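
If that covers your case, here is a minimal sketch of a job that skips setup-node entirely and relies on the preinstalled Node.js (the checkout action and the npm commands are assumptions about what your project runs):

your_job:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    # No setup-node step: use whatever Node.js version is preinstalled on the image.
    - run: node --version
    - run: npm ci
    - run: npm test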

Update: v2-beta is available; it downloads from a releases cache for additional reliability and falls back to node dist on a miss or failure. We also added the latest of each LTS directly on the image, so if you reference a major version binding ('10', '12', etc.) it should be virtually instant, with no reliability issues from downloading.

More here: https://github.com/actions/setup-node#v2-beta

Note that this is the planned solution to this problem.
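
For reference, a minimal sketch of a step using the v2-beta tag, following the linked readme (the quoted major version is only an example):

- name: Use Node.js 12
  uses: actions/setup-node@v2-beta
  with:
    node-version: '12'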

It’s being tracked in the node upstream repo https://github.com/nodejs/node/issues/32683

One of the major issues with this action is that there are next to no logs printed. I brought this up in #90 for the version it's using, but this is another situation where we need logs!

Same here. We are using Node 12.x.

Another alternative is to run your job in a container:

your_job:
  runs-on: ubuntu-latest
  container: node:10.9
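
For context, a slightly fuller sketch of that containerized approach (the checkout action and npm commands are assumptions about what the job actually runs):

your_job:
  runs-on: ubuntu-latest
  # All steps run inside the node:10.9 image, so Node.js is already
  # available and no setup-node step is needed.
  container: node:10.9
  steps:
    - uses: actions/checkout@v2
    - run: npm ci
    - run: npm test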

@mieszko4 - yes, if you don't need or want to validate against a specific version or version spec (range) of node, then you can rely on the one that's globally installed on the machine image.

Note that the global version on the image will slide (13, then 14, etc.).

The core value of setup-node is using a specific version: I want to test my code against LTS 8, 10, 12, and latest in a matrix, in parallel.

That said, we are fixing this issue now so that you can have the best of both worlds. You can lock to an LTS (12.x) and do matrix testing without ever doing a download, which also cuts out the typical tens of seconds of setup time.

If it’s not matching a major LTS, then it will download.

So part 2 of the work (also starting now) is to build a cache (backed by a CDN) of versions, so that when we do read through, it hits our cache.

I might add a flag to the action to force reading through and getting the absolute latest, if you need that, since there will be some latency on both the images and the cache.
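
To illustrate the matrix scenario described above, here is a sketch of testing several Node lines in parallel (the version list and the npm test command are examples, not a recommendation):

your_job:
  runs-on: ubuntu-latest
  strategy:
    matrix:
      node-version: [8.x, 10.x, 12.x, 13.x]
  steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - run: npm test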

@swanson Thanks for the note about setting a timeout; 1.5x your typical time is a good suggestion. Very early on we had a much lower default timeout, but then we had the opposite problem.
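
For anyone who wants to guard against the 6-hour default in the meantime, a job-level timeout is a one-line addition (15 minutes here is just an example of roughly 1.5x a typical run):

your_job:
  runs-on: ubuntu-latest
  # Fail the job after 15 minutes instead of hanging for the 6-hour default.
  timeout-minutes: 15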

Same here. It looks like I'm sometimes timing out and getting:

Premature close
Waiting 16 seconds before trying again
Premature close
Waiting 18 seconds before trying again
##[error]Premature close

and sometimes it works fine but still takes 7-11 minutes per install for some reason (12.x).

@amio - yes, this action downloads from the node distribution into the tool-cache folder. It does so when it can't match the semver version spec to what's already in the tool-cache folder. On self-hosted runners, that cache fills up with use and gets faster over time. On hosted runners, it depends on whether that cache is prepped with likely hits (e.g. the latest of every LTS major version line). Remember that hosted VMs are a fresh machine every time; hence the plan mentioned above to prep that cache as part of our image generation process. Doing that will mean virtually zero-second resolution of 8.x, 10.x, 12.x specs and no reliability issues.

Part 2 of the work refers to a different cache: an off-VM cache (a CDN) where we store all the node distributions, so everyone's workflows aren't hitting the node distribution directly (which is not a CDN). We can also get better locality and more reliability from a CDN. This is essentially what was suggested in the node issue. It's not a trivial task, but we're working on it now. You can see the toolkit start here. The VM generation that populates that cache isn't OSS yet; it's early and we're sketching it out.

So essentially, it will be: (1) check the local VM cache (great for self-hosted, prepped on hosted); then (2) if no local match, download from the CDN; then (3) if still no match, fall back to node dist.

Hope that helps clarify.

@konradpabjan Node itself is having an outage right now that is causing this - https://github.com/nodejs/node/issues/32683

If you're finding this thread and are blocked by timeout issues, here are two things you may want to try until this is resolved:

I cloned the repo and ran the unit tests; it would appear that the issue is with nodejs.org:

::debug::Failed to download from "https://nodejs.org/dist/v10.16.0/node-v10.16.0-darwin-x64.tar.gz". Code(500) Message(Internal Server Error)

It's not consistent, though; a re-run of the tests can get past this issue.

Experiencing similar issues with Node 12.x. The step either times out or takes far too long (~25 mins).

good to know other people are having issues - seriously stalling my pipeline 😞

I had an issue with the 6-hour timeout over a month ago because my script hung. Yes, it ate up all my minutes for the month. Who sets a default timeout to 6 hours?! For me it is a trap. I wrote an email to GitHub support hoping they could compensate me some minutes, because the 6-hour default timeout is not clearly documented, but they just redirected me to the "Product Team", which has not replied to me since. Since then I have set up all my jobs with a reasonable timeout.

And now/today I am wasting my time and GitHub minutes running builds multiple times, hoping they will pass without Unexpected HTTP response: 500 or ##[error]Premature close. I am really mad at this point.

@dentuzhik - I should have clarified that hosted images should be installing the LTS globally, so the real scenario would be sliding from 10 to 12 and then from 12 to 14.

Also, one more note: this setup task also sets up problem matchers and auth. You still get that if you use setup-node without a node-version property. That combo will set up problem matchers and proxies (if that's relevant for your toolset) but won't download a node; it will use the one on the PATH globally.

I'll update the readme with all of this, but also note that one more variable is self-hosted runners.
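
As a sketch of that no-node-version combination:

- uses: actions/setup-node@v1
  # No node-version input: problem matchers (and registry auth, if configured)
  # are set up, and the Node.js already on the PATH is used.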

Just browsing random pages on https://nodejs.org sometimes gives a 500 error as well, so it looks like this issue is indeed on their end

Hello @patrickk. Thank you for your report. First, the action will try to use the Node.js (LTS) from the toolcache preinstalled on the hosted images. If the action cannot match the version there, it will try to download Node.js (LTS) from node-versions. If it does not find a version in node-versions, it will try to download it from https://nodejs.org/dist.

Could you please create an issue with a public repository or repro steps?

Omitting this whole package made it super fast again:

- name: Use Node.js ${{ matrix.node-version }}
  uses: actions/setup-node@v1
  with:
    node-version: ${{ matrix.node-version }}

The runner has Node 12.x by default, so there's no need to use this action when you don't care about the Node.js version.

Just use

runs-on: ubuntu-latest

without any container.

@Raul6469 That won't work if you depend on the default dependencies in the GitHub images, like the AWS CLI. Although I think that's something one should plan half a man-day for…

We have the same problem. Today every pipeline is failing because of this. We are using Node.js 12.x.