super-linter: Unexpected linter update

Describe the bug Linter is running the latest v3 tag (v3.4.0) despite being pinned to github/super-linter@v3.3.2.

To Reproduce Steps to reproduce the behavior:

  1. Use github/super-linter@v3.3.2 in GitHub action.
  2. Push code. It is then linted by latest v3 image (v3.4.0 currently).

Expected behavior Linting should run at the version I set it to (v3.3.2).

Screenshots Not applicable

Additional context Example Run I use github/super-linter@v3.3.2 over docker://github/super-linter:v3.3.2 for Dependabot updates. Introduced by #428 / 70f801e
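For context, the two ways of referencing the linter mentioned above would look roughly like this in a workflow file (step names here are illustrative, not from the project):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Tag reference: resolves the action's repo at tag v3.3.2, but the
      # action.yml there can point at a floating Docker tag, so the image
      # actually pulled may be newer than v3.3.2 (the bug reported here).
      - name: Lint (tag reference)
        uses: github/super-linter@v3.3.2
      # Direct Docker reference: pulls exactly the v3.3.2 image, but
      # Dependabot does not track updates for this form.
      - name: Lint (docker reference)
        uses: docker://github/super-linter:v3.3.2
```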

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 34 (17 by maintainers)

Most upvoted comments

@leviem1 lol yes, being a spokesperson for GH is a big responsibility that can be really difficult especially with the number of things we are working on. To dive into the docs you quoted with my own understanding:

more easily share containers in your organization

This means that you will be able to have a repo in the organization that hosts multiple organization wide packages.

set granular access permissions

I believe that this refers to being able to set scope of the visibility/permissions as documented here

anonymously access public container images

Meaning that you don’t need an authenticated connection, but note that this is specifically for public containers and not private ones.

@alerque Also, check out the new thing in the Feature Preview named “Improved Container Support”. This might be a bit closer to what you’re looking for.

You can more easily share containers in your organization, set granular access permissions, and anonymously access public container images. Additionally, layers are shared between images across GitHub.

@zkoppert Sorry to treat you as the spokesperson for GH, but do you have any more info on this? Docs are a bit underwhelming, as is expected for something that’s in preview. Also, thanks for taking a swing at this issue recently. I know there are bigger fish to fry, but just knowing it’s on your radar is reassuring.

Also, for @alerque’s sake, can you please give us a “yes”, “no”, or “not allowed to say either way” on whether anything is being worked on for using cached builds when specifying image: ‘Dockerfile’ in the action.yml? Are there any suggestions for other ways to achieve this behavior in the meantime?

https://github.com/actions/runner/blob/007ac8138b0d7a6a0b236c8026906c84c78980e4/src/Runner.Worker/Handlers/ContainerActionHandler.cs#L47 is the code that builds the docker image every time.

An issue at https://github.com/actions/runner/issues is probably relevant here.

In general for docker caching though, https://github.com/actions/cache/issues/31 might help.
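One common workaround along the lines of that cache issue is to save the built image as a tar archive and restore it on later runs. A rough sketch (the cache key and paths are illustrative, not an official recipe):

```yaml
steps:
  - uses: actions/checkout@v2
  # Cache the saved image archive between runs, keyed on the Dockerfile
  # so a changed Dockerfile invalidates the cache.
  - uses: actions/cache@v2
    with:
      path: /tmp/docker-cache
      key: docker-${{ hashFiles('Dockerfile') }}
  # Load the image on a cache hit; otherwise build and save it.
  - run: |
      if [ -f /tmp/docker-cache/image.tar ]; then
        docker load -i /tmp/docker-cache/image.tar
      else
        docker build -t my-action:latest .
        mkdir -p /tmp/docker-cache
        docker save my-action:latest -o /tmp/docker-cache/image.tar
      fi
```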

Awesome, thanks so much! Unless you’re on call or bored or something, consider starting your weekend! Late night pushes are usually the messy ones 😉

@zkoppert This isn’t just a SuperLinter issue—although this is an egregious example of why hacking around the real problem creates other problems. The real issue is that the Actions runner doesn’t cache builds. Presumably this is because of security concerns of caching something triggered by some random repo. Just invert it and let the upstream action tag and prime the cache. The GH Actions runner should check if there is a Docker image matching the tag in the GH Packages repo for the action project. If it exists, use it instead of rebuilding the entire container from scratch.
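The proposal above amounts to a pre-build check in the runner. Here is a hypothetical sketch in shell, with the registry probe stubbed out as an argument so the decision logic stands on its own (the function name and argument convention are invented for illustration; a real implementation might probe the registry with `docker manifest inspect`):

```shell
# Hypothetical sketch of the proposed runner behavior: before building an
# action's Dockerfile, decide whether a pre-built image with a matching tag
# can be pulled instead. The registry probe result is passed in as $1
# ("found" or "missing") so the logic is testable without network access.
choose_image_source() {
  probe_result="$1"   # "found" if the tag exists in the packages registry
  if [ "$probe_result" = "found" ]; then
    echo "pull"       # reuse the image primed by the upstream action
  else
    echo "build"      # fall back to building from the Dockerfile
  fi
}

# Example: a tag published by the upstream action is reused, not rebuilt.
choose_image_source found    # prints "pull"
choose_image_source missing  # prints "build"
```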

Hey super-linter users and contributors, sorry for not jumping on this 🤦🏻 . I’m taking a look at this now to see if we can’t get this turned around.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

If you think this issue should stay open, please remove the O: stale 🤖 label or comment on the issue.

@hoffmang9 Yes this is still very broken, and yes that workaround should still work.

Given the change in #428, using github/super-linter@v3.3.2 is effectively the same as docker://github/super-linter:v3.3.2: it will always pull the image. Maybe we should allow users to build the image themselves if they want to?

Isn’t the workaround to update action.yml each time a new release is made? You’re already generating Docker versions of each release: https://github.com/github/super-linter/packages/106548/versions

And a tag for each release: https://github.com/github/super-linter/tags

So if the tag’s action.yml contained an instruction which used the Docker’s specific version, you’d always get what you wanted (with the possible exception of master being one version behind I think).
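The suggestion above would amount to each release tag carrying an action.yml pinned to that release’s own image. A sketch of what the v3.3.2 tag’s file might contain (this is an illustration, not the project’s actual action.yml):

```yaml
# Hypothetical action.yml for the v3.3.2 tag, pinning the image to the
# matching Docker version instead of a floating tag.
name: "Super-Linter"
description: "Lint your code base"
runs:
  using: "docker"
  image: "docker://github/super-linter:v3.3.2"
```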

@github-actions The only thing stale around here is you. Not only is this issue not fixed and very much still a problem, but the current hack that leads to this problem exists to work around a serious shortcoming in Actions itself (that’s you): one cannot use the default uses syntax and get cached builds, which is just crazy pants. The only way to do this is to use off-site Docker images. And even GitHub’s own package registry, which should be a great way to do that, is unusable because one can’t pull public images without authentication. Hence this project, GitHub’s very own, is both using a hack to avoid the usual build process they force on everybody else and outsourcing the Docker part to a 3rd party.