checkout: Can't use 'tar -xzf' extract archive file

Download action repository 'actions/checkout@v3' (SHA:f43a0e5ff2bd294095638e18286ca9a3d1956744)
Error: Can't use 'tar -xzf' extract archive file: /home/runner/work/_actions/_temp_81e514cc-59b2-4af3-aa4d-6f8210ffde88/752d468e-f00d-474c-af3f-80fbe27b5318.tar.gz. return code: 2.

That hash points to v3.6.0

I do not know what to add here. https://github.com/nunomaduro/larastan/actions/runs/6074130772/job/16477441571

About this issue

  • Original URL
  • State: closed
  • Created 10 months ago
  • Reactions: 481
  • Comments: 76

Most upvoted comments

GitHub Status says ✅ All Systems Operational

The GitHub response time for updating incidents is really slow; this has been happening for over an hour now. This is not an acceptable SLA.

While we wait: is anyone watching something interesting?

actions/checkout@v4 - v4 works. It can be used as a quick fix.
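For reference, the suggested quick fix is a one-line change to the checkout step in each workflow; a minimal sketch (the surrounding step list is illustrative, not taken from any workflow in this thread):

steps:
  # quick fix suggested above: point the checkout step at the v4 tag instead of v3
  - uses: actions/checkout@v4   # was: actions/checkout@v3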

Updating to v4 is not a solution for companies that have hundreds of workflows and reusable workflows built around the v3 version.

Breaking changes out of the blue are not as easy as “oops, update to v4”.

Still happening for us.

Still happening right now: Can’t use ‘tar -xzf’ extract archive file: /home/runner/work/_actions…

And now it’s 503 errors

Warning: Failed to download action 'https://api.github.com/repos/actions/checkout/tarball/3df4ab11eba7bda6032a0b82a6bb43b11571feac'. Error: Response status code does not indicate success: 503 (Service Unavailable).

And there's an ongoing incident: https://www.githubstatus.com/incidents/frdfpnnt85s8

The move to v4 working could be totally anecdotal.

v4 uses Node 20 by default, so everybody updating should take that into account. Most tests will pass, but deployments - not so sure.

I’ll wait.

The GitHub response time for updating incidents is really slow; this has been happening for over an hour now. This is not an acceptable SLA.

100% correct.

It looks like a cache propagation issue; depending on where you try the following from:

# download the tarball for the v3.6.0 commit, then try to extract it
wget https://api.github.com/repos/actions/checkout/tarball/f43a0e5ff2bd294095638e18286ca9a3d1956744
tar -xzf f43a0e5ff2bd294095638e18286ca9a3d1956744

you will either get the files or a broken archive. I would guess no action from users, nor an upgrade, is needed 😃

This has just been updated, so I guess we have to wait.

GitHub Actions probably relies on https://api.github.com/repos/XXX/XXX/tarball/XXX, which is what is failing. It doesn’t just fail in GitHub Actions, either: I ran into error: unable to download 'https://api.github.com/repos/NixOS/nixpkgs/zipball/57695599bdc4f7bfe5d28cfa23f14b3d8bdf8a5f': HTTP error 500 in Nix as well.

I can also confirm that the issue did/does exist.

I can’t speak for the v4 upgrade being a fix, but using GitHub’s “Retry Failed Jobs” a few times eventually allows the job to work (even while keeping the v3 tag). ✨

The incident has been resolved. 🤔

Party on 🥳

Confirmed, updating to v4 fixes the issue.

Apparently this resurfaced just now - within the last ten minutes, three different repositories linked to this issue, and we see it happening on https://github.com/jupyterlab/jupyterlab/pull/15056.

Doesn’t appear to be completely resolved: it’s happening again here.

It would be nice if someone from GitHub staff could actually confirm why it’s breaking, beyond just “This incident has been resolved.”

Seems so - my pipeline in v3 works now 🎉

Interesting - network reliability aside, I wonder why actions/checkout isn’t retrying on network errors.

It’s not actions/checkout — it’s the Actions runner itself downloading and extracting all the used action packages before starting the workflow.

Yeah, this seems unrelated to the checkout action and more an issue on the Runner side.

Changing actions/checkout from v3 to v4 is not a fix; I’m getting random outcomes with either tag: some jobs work, others fail.

Ted Lasso!

Still inconsistent/flaky for us; it seems to happen for the majority of runs.

Seems to be a transient network-related issue, based on previous instances.

#514 #545 #672 #673 #815 #948 #949

I’d imagine that a new incident will be announced soon, I’m keeping an eye on https://www.githubstatus.com/

It’s up: https://www.githubstatus.com/incidents/tqrmcdmghldw

I’d imagine that a new incident will be announced soon, I’m keeping an eye on https://www.githubstatus.com/

The GitHub response time for updating incidents is really slow; this has been happening for over an hour now. This is not an acceptable SLA.

This is standard GitHub practice these days. We experience issues with deploying our code at least once every month. I never deploy without looking at https://www.githubstatus.com/ anymore. This is sad.

actions/checkout@v4 helps me. But is it safe to use?

We have the problem inside matrix builds, but not in our linting job, which does not use the matrix declaration.
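For context, the matrix declaration is the strategy.matrix block that fans one job definition out into many jobs; a rough sketch (the job name, runner, and matrix values below are made up). Each matrix job runs on its own runner and downloads its actions separately, which would be consistent with matrix builds hitting the flaky download more often simply by chance:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        php: ['8.1', '8.2']   # hypothetical axis; each value becomes its own job
    steps:
      - uses: actions/checkout@v3   # every matrix job downloads the action on its own runner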

Oh no… We need this to work today :#

EDIT: No, I cannot just change to v4 for a production environment that needs to go live soon.

We just updated to v4 and are getting the same Can't use 'tar -xzf' extract archive file: /home/runner/work/... return code: 2. error as on v3.

Upgrading from v3 to v4 for large codebases is not an acceptable solution. Please fix v3 🙏

Plus, running actions from older branches will not be possible unless we upgrade the version on all branches of our repo.

Same issue here, of course. Because of the way we use this action, I now have hundreds of pipelines broken on v3 and potentially hundreds of developers blocked 🤦‍♂️

The second re-run (third attempt) worked for me: https://github.com/DawnbrandBots/bastion-bot/actions/runs/6074117346. I think it’s not related to this specific repository but to GitHub itself serving the tarball.

Happens on v4 as well.

The error continues to occur: Error: Can't use 'tar -xzf' extract archive file: /home/runner/_work/_actions/_temp_df1ed10d-c92b-4e3b-b959-9aabe0ae5fa0/b247af82-9e91-414c-9b2d-b00291508b18.tar.gz. return code: 2.

@hluaces Watching Black Mirror for the first time on Netflix at the moment; most episodes are very interesting, quite thought-provoking, and tech-related. But this issue is likely fixed after 1 ep 😛

Interesting - network reliability aside, I wonder why actions/checkout isn’t retrying on network errors.

It’s not actions/checkout — it’s the Actions runner itself downloading and extracting all the used action packages before starting the workflow.

There is an ongoing incident on the status page: https://www.githubstatus.com/incidents/frdfpnnt85s8

Actions is still not working? Jeez guys, your 99.9% SLA has already been violated.

Seems like there’s another incident 😩

https://www.githubstatus.com/incidents/w01m57lwx8r3

Still seeing this on builds with v3.

Looks intermittent, as a couple of reruns seem to make it work.

Intermittent, but failing more often than succeeding.

Looks intermittent, as a couple of reruns seem to make it work.

+1 here, getting the error now. Is upgrading to v4 A) the best option and B) non-breaking?

same issue here

It looks like it affects multiple (to say the least) users 😃

For a workaround, is anyone aware of how to pin the version used in a workflow to an earlier release?
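For what it's worth, the uses: reference in a workflow can point at any tag or full commit SHA rather than the moving v3 tag; a sketch pinning to the v3.6.0 commit quoted at the top of this thread (the surrounding step list is illustrative). Judging by the errors above, though, the runner still downloads a tarball for whatever ref is pinned, so pinning alone probably would not have avoided this outage:

steps:
  # pin to an exact commit (the v3.6.0 SHA mentioned earlier) instead of a tag
  - uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0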

up: I too can confirm that changing to checkout@v4 works.