kubernetes: Eliminate shellcheck failures

What happened: After #24614 / #68438, we had 283 known-failing *.sh files excluded from shellcheck lints.

What you expected to happen: We should be linting all files. While files are listed in hack/.shellcheck_failures, we may miss actual bugs in them. Initially, to get the linter in place, we had to list many files there to keep the PR reasonably sized, but ideally we can fix these over time and eliminate the file.
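
To illustrate the mechanism being eliminated, here is a minimal sketch (not the actual hack/verify-shellcheck.sh logic; the file names are made up) of how a known-failures file gates a linter: files listed verbatim in the failures file are skipped, and everything else must pass.

```shell
# Seed a failures file with one "grandfathered" script.
failures_file=".shellcheck_failures"
printf '%s\n' "./legacy.sh" > "$failures_file"

should_lint() {
  # Lint the file unless it appears verbatim in the failures list.
  ! grep -qxF "$1" "$failures_file"
}

if should_lint "./legacy.sh"; then echo "lint legacy"; else echo "skip legacy"; fi  # → skip legacy
if should_lint "./new.sh"; then echo "lint new"; else echo "skip new"; fi          # → lint new
```

Emptying and then deleting the failures file is exactly what this issue tracks: once the list is empty, every script goes through the `should_lint`-style path unconditionally.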

How to reproduce it (as minimally and precisely as possible): hack/verify-shellcheck.sh

Anything else we need to know?: I intend to continue working on this, e.g. #72861, #72955, …

/sig testing
/assign

Known failing files TODO:

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 8
  • Comments: 129 (126 by maintainers)

Most upvoted comments

And BAM now #90442 is also merged, leaving hack/.shellcheck_failures an empty file. 👍

Since I just did this out of curiosity for hack/.golint_failures…

There have been 258 commits that touched hack/.shellcheck_failures in the past 2 years

shellcheck_failures linecount over time

Joy to the world, all fixes for cluster/gce/util.sh (#90405) are finally in place!

A bit of advice for those working on this: make it easy on your reviewers. Explain why you’re changing what you’re changing. “Fixing shellcheck failures” is not enough context.

Some details that could help your reviewer:

  • what shellcheck failure are you fixing?
  • why are you doing it this way?
  • why is this more maintainable than what was there before?
  • if you need to ignore a shellcheck failure, why? is there some other way of doing things that would accomplish the same goals and not trigger shellcheck?
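
As a hypothetical example of the kind of context reviewers want: suppose a PR fixes SC2086 (unquoted variable expansion causes word splitting and globbing). The file names and paths below are invented for illustration; the point is that quoting is the fix, and why.

```shell
src="my file.txt"
dest="/tmp/out dir"

# Before (triggers SC2086, and actually breaks on the space in $src):
#   cp $src $dest
# After: quote every expansion so paths containing spaces survive intact.
mkdir -p "$dest"
touch "$src"
cp "$src" "$dest"
ls "$dest"   # → my file.txt
```

A one-line PR description like "quote expansions to fix SC2086; unquoted $src broke on paths with spaces" answers most of the questions above up front.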

https://github.com/kubernetes/kubernetes/pull/99816 removes the hack/.shellcheck_failures file and associated logic. No more failures allowed, CI will enforce this now*

* unless someone really needs to ignore something and adds a #shellcheck disable=... instead 🙃
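
A sketch of that escape hatch: an inline directive suppressing one specific check, with a comment justifying it. The variable name here is made up; SC2034 flags variables that look unused, and the assumption in this example is that another script sources this file and consumes the value.

```shell
# CLUSTER_NAME is consumed by scripts that source this file,
# so the "unused variable" warning is a false positive here.
# shellcheck disable=SC2034
CLUSTER_NAME="e2e-test"
echo "${CLUSTER_NAME}"   # → e2e-test
```

The directive applies only to the next line, which keeps the suppression narrow and auditable, as opposed to the repo-wide exemptions the failures file used to grant.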

Thank you so much all!

/milestone v1.20 I don’t think we’re quite going to hit 1.19 and that’s OK.

Sorry I’ve become ~inactive here, I’ve gained more duties and we’re struggling with test health right now, I don’t expect much time for this myself this release.

I do believe there are now actions in progress for all remaining files in hack/.shellcheck_failures:

  • ./cluster/gce/gci/configure-helper.sh #90433
  • ./cluster/gce/gci/configure.sh #90442
  • ./cluster/gce/gci/master-helper.sh #88582 (needs rebase, owner @gavinfish)
  • ./cluster/gce/util.sh #90405
  • ./cluster/log-dump/log-dump.sh #88349

In the future please add me to the PR (/cc @BenTheElder for review, /assign @BenTheElder for approval if it’s reviewed and I am an OWNER of the relevant files).

I use https://gubernator.k8s.io/pr to monitor incoming PRs, those PRs seem to be assigned to others (who are not responding 😦)

I created PRs to fix the following script files. Could someone review them?

  • ./cluster/gce/gci/flexvolume_node_setup.sh #81061
  • ./cluster/test-e2e.sh #80971
  • ./test/images/image-util.sh #80742

I don’t think we’ll make 1.16, I’ve not had quite as much time to work on these and some of the remaining ones are … challenging. We’re down to only 24 files though!

If you are looking to take on fixing an item on the list, please remember to go up half-way through this issue and expand the hidden items. There are a lot of comments and PRs for items that people are actively working on that will not show up when searching unless they are expanded.

For those working on the shellcheck issues, please do not put “fixes #72956” in your PR. This is an umbrella issue for tracking that shouldn’t be closed until all the shellcheck issues are resolved.

@BenTheElder would it be ok if I helped out with this? I was thinking I could go through and cover the scripts in ./cluster/centos/ to start with

/milestone v1.14

still may hit some test flakes, but I believe the remaining fixes are all LGTM+approve.

I think we should be landing these soon, well within 1.21.

Then we can follow up with making the verify-shellcheck script much simpler and faster (by not supporting known failures), cf. https://github.com/kubernetes-sigs/kind/blob/master/hack/make-rules/verify/shellcheck.sh

Updated the checklist again, looking at outstanding PRs.

👋 Hi, Bug Triage here, we’re pretty late into the code freeze for 1.20 - I am going to put this in the 1.21 milestone

/milestone v1.21

+1, let’s try breaking it up a little. Sorry for the lack of input (“queue” gets long sometimes) and thank you for working on this!

Closing in now, log-dump.sh seems to be almost in. Current status of remaining files:

  • ./cluster/gce/gci/configure.sh #90442
  • ./cluster/gce/gci/master-helper.sh #88582 (needs rebase, owner @gavinfish, now stale due to inactivity)
  • ./cluster/gce/util.sh #90405
  • ./cluster/log-dump/log-dump.sh #88349

Updated the checklist, rebasing a PR I have open for log-dump.sh

It sure is pretty close now!


anyone is free to work on anything, PRs welcome 😅

the question will just be getting them in. I apologize for my own review latency these days, have a bit too much going on.

On Wed, Feb 26, 2020 at 12:59 AM Joakim Roubert notifications@github.com wrote:

@joakimr-axis https://github.com/joakimr-axis Can I take some of them? ./cluster/gce/gci/master-helper.sh ./cluster/gce/upgrade.sh

That would be a question for the admins, I guess, but as for me, I am not working on those.

— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/kubernetes/kubernetes/issues/72956?email_source=notifications&email_token=AAHADK2QWEULE6K73OCIHWDREYVPHA5CNFSM4GQKLPMKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEM7LTVQ#issuecomment-591313366, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAHADKZNXX6HV4E7PLQSF7DREYVPHANCNFSM4GQKLPMA .

We’ve gained a new cluster/ approver or two since then, I’ll drop a note and see if we can get that approved finally …

On Mon, Feb 24, 2020, 01:41 Joakim Roubert notifications@github.com wrote:

updated the checklist, 9 to go! have a PR out for one of them #88349 https://github.com/kubernetes/kubernetes/pull/88349

@k-toyoda-pi https://github.com/k-toyoda-pi has #82059 https://github.com/kubernetes/kubernetes/pull/82059 for cluster/pre-existing/util.sh which seems to be stranded since November?


This will go into 1.18 most likely. We have some new issues to fix with the next version of shellcheck WIP.

On Sat, Oct 26, 2019, 11:56 Kirsten notifications@github.com wrote:

Hi @BenTheElder https://github.com/BenTheElder

Bug Triage Release Team checking in. Code freeze for the 1.17 release is on November 18. Is this still intended for 1.17? Thanks!


Here is a list of script files whose lint failures are not yet fixed in any PR. Please refer to it.

  • ./build/lib/release.sh
  • ./cluster/clientbin.sh (fix a part of failure in #76152)
  • ./cluster/gce/config-common.sh
  • ./cluster/gce/config-default.sh
  • ./cluster/gce/config-test.sh
  • ./cluster/gce/gci/configure-helper.sh
  • ./cluster/gce/gci/configure.sh
  • ./cluster/gce/gci/flexvolume_node_setup.sh
  • ./cluster/gce/gci/health-monitor.sh
  • ./cluster/gce/gci/master-helper.sh
  • ./cluster/gce/list-resources.sh
  • ./cluster/gce/upgrade-aliases.sh
  • ./cluster/gce/upgrade.sh
  • ./cluster/gce/util.sh
  • ./cluster/restore-from-backup.sh
  • ./hack/cherry_pick_pull.sh
  • ./hack/ginkgo-e2e.sh
  • ./hack/grab-profiles.sh
  • ./hack/lib/init.sh
  • ./hack/lib/swagger.sh
  • ./hack/lib/test.sh
  • ./hack/lib/version.sh
  • ./hack/update-vendor.sh
  • ./hack/verify-import-boss.sh
  • ./hack/verify-no-vendor-cycles.sh
  • ./hack/verify-openapi-spec.sh
  • ./hack/verify-readonly-packages.sh
  • ./hack/verify-test-featuregates.sh
  • ./test/cmd/batch.sh
  • ./test/cmd/certificate.sh
  • ./test/cmd/core.sh
  • ./test/cmd/crd.sh
  • ./test/cmd/create.sh
  • ./test/cmd/diff.sh
  • ./test/cmd/discovery.sh
  • ./test/cmd/generic-resources.sh
  • ./test/cmd/get.sh
  • ./test/cmd/legacy-script.sh
  • ./test/cmd/node-management.sh
  • ./test/cmd/old-print.sh
  • ./test/cmd/proxy.sh
  • ./test/cmd/rbac.sh
  • ./test/cmd/request-timeout.sh
  • ./test/cmd/run.sh
  • ./test/cmd/save-config.sh
  • ./test/e2e_node/conformance/run_test.sh
  • ./test/e2e_node/environment/setup_host.sh
  • ./test/e2e_node/gubernator.sh

/reopen

I put out a diff this morning to address the following in hack: https://github.com/kubernetes/kubernetes/pull/74420

hack/verify-linkcheck.sh
hack/verify-pkg-names.sh
hack/verify-spelling.sh
hack/verify-staging-godeps.sh
hack/verify-symbols.sh
hack/verify-test-images.sh
hack/verify-test-owners.sh
hack/verify-typecheck.sh

Fixed by #74385

  • hack/generate-bindata.sh
  • hack/generate-docs.sh
  • hack/install-etcd.sh
  • hack/update-bazel.sh

I addressed all of hack/verify-generated-* in a PR this morning 😃 Can be found here: https://github.com/kubernetes/kubernetes/pull/74349.

I’ll work on the scripts in ./cluster/gce/

I will fix:

  • ./hack/dev-build-and-push.sh
  • ./hack/dev-build-and-up.sh
  • ./hack/dev-push-conformance.sh
  • ./hack/dev-push-hyperkube.sh

@BenTheElder Can I help with ./build/common.sh?

That would be great! there’s a lot to tackle here 😅

If you could reference this issue (just include a link somewhere, please don’t mark it as “fixes #” 🙃) that would be helpful for attempting to avoid overlap 😃 I’ll continue to do the same as I get to more of these.