kubernetes: Fix golint failures

Anyone itching to understand the code and try fixing things in Kubernetes, please help fix the lint failures.

  • Step 1: Remove one or two related packages from the list in https://github.com/kubernetes/kubernetes/blame/master/hack/.golint_failures (note: do not change generated files, the name of defaulting functions, or the name of conversion functions… packages failing linting due to issues in those files will need to be left alone).
  • Step 2: Run hack/verify-golint.sh and look at the logged errors.
  • Step 3: Fix the code so that hack/verify-golint.sh runs cleanly.
  • Step 4: Run hack/update-bazel.sh and hack/update-gofmt.sh to fix any remaining bazel or gofmt problems.
  • Step 5: File a PR with the change.
  • Step 6: Go to step #1 😃
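For a sense of what these fixes typically look like, here is a minimal, hypothetical Go sketch covering the most common golint complaints (missing doc comments on exported identifiers, mis-cased initialisms, and capitalized error strings). The package and function names are invented for illustration and do not come from the Kubernetes tree:

```go
// Package example is a made-up package used only to illustrate typical golint
// fixes; it is not part of the Kubernetes repository.
package example

import "errors"

// ParseNodeID parses a raw node identifier. golint wants every exported
// identifier to have a doc comment that starts with its name, and wants
// initialisms written as "ID" rather than "Id".
func ParseNodeID(raw string) (string, error) {
	if raw == "" {
		// golint also wants error strings to start lowercase and carry no
		// trailing punctuation, since callers often wrap them.
		return "", errors.New("node identifier must not be empty")
	}
	return raw, nil
}
```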

Please use the tracking spreadsheet here.

For everyone who is going to contribute:

  • Please append the packages you picked at the end.
  • For existing contributors, please make sure this list is up to date.

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 33
  • Comments: 121 (99 by maintainers)

Commits related to this issue

Most upvoted comments

After a couple years of seeing changes spawned from this issue, I don’t think this is a good use of contributors’ or reviewers’ time. golint is very opinionated about:

  • exported type and function names, leading to PRs attempting to change exported things people use downstream
  • missing documentation on interface implementation functions, which leads to not-very-useful copy/paste of interface godoc (both patterns are sketched below)
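To make those two complaints concrete, here is a small hypothetical sketch; the package and type names are invented, and the quoted message paraphrases golint's usual wording:

```go
// Package lister is a made-up package used only to illustrate the two golint
// complaints above; it does not exist in the Kubernetes tree.
package lister

import "fmt"

// ListerNodeLister is the kind of exported name golint flags as stuttering
// ("type name will be used as lister.ListerNodeLister by other packages").
// "Fixing" it means renaming an exported type that downstream code already
// imports, which is the concern in the first bullet.
type ListerNodeLister struct{}

// String implements fmt.Stringer.
//
// The comment above is the "not-very-useful copy/paste of interface godoc"
// from the second bullet: it silences the missing-comment warning on an
// exported method without telling the reader anything new.
func (l ListerNodeLister) String() string { return "node lister" }

// Compile-time assertion that the interface is actually implemented.
var _ fmt.Stringer = ListerNodeLister{}
```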

the govet and staticcheck verify jobs are much more useful since they catch correctness issues, but I would recommend closing this issue and dropping the golint checks from CI

The golint documentation itself discourages using its output in an automated, enforced way:

from https://github.com/golang/lint#purpose:

The suggestions made by golint are exactly that: suggestions. Golint is not perfect, and has both false positives and false negatives. Do not treat its output as a gold standard. We will not be adding pragmas or other knobs to suppress specific warnings, so do not expect or require code to be completely “lint-free”. In short, this tool is not, and will never be, trustworthy enough for its suggestions to be enforced automatically, for example as part of a build process.

@yue9944882 +1 to slightly larger pulls with related packages. if we allow them to get too large, the reviewers will not be able to do a thorough review; we should spread the load among multiple reviewers. (if we allow a really large pull, we will end up with top-level reviewers who will face the burden and the PR(s) will just stagnate)

@dims Created a spreadsheet with all the packages that have ongoing PRs. Maybe you can add this to the summary.

  • For everyone who is going to contribute, please append the packages you picked at the end.
  • For existing contributors, please make sure this list is up to date.

we have 800-ish pkgs left to fix. my concern is that if we fix them package by package, pulls like these will flood the repo. can’t we make them into larger pulls? the larger the better?

thanks everyone for getting us this far. We essentially gave up on the existing structure/system for golint: https://github.com/kubernetes/kubernetes/pull/98063

At some point we can look at adding golangci-lint; there’s a WIP in https://github.com/kubernetes/kubernetes/pull/93656

/close

@guineveresaenger looks like the disconnect is because that problem does not show up in the version of golint we use in the CI.

please see https://github.com/kubernetes/kubernetes/blob/master/hack/verify-golint.sh#L33-L34

hi @jlucktay thanks! there is a spreadsheet here that tracks the progress on fixing these issues, consider adding the packages you’re working on there so we can keep everything organized 👍 https://docs.google.com/spreadsheets/d/1VhU6zCk-vaAnbu_XkgsIq5I7jVnezUquWBMhcIN6AJM/edit#gid=0

I did indeed see the Google Sheet link (thank you @jonfriesen!) and hacked something together to take it a little further 😬

if we allow a really large pull, we will end up with top level reviewers who will face the burden and the PR(s) will just stagnate

oh good point. we could put one pkg’s change into one commit, and merge all changes under one OWNER area into one pull so that it won’t spread the load. wdyt? for each pull, we can have hundred-ish lines of change. but it will still be 30~50 pulls i guess? good news is that we can take our time to deal with it, at least before 1.12 😉

Can someone please review and let me know if I’ve done this right? #97167

I’ve found a few others I’m actively looking at, you won’t get any argument from me.

On Sat, Oct 12, 2019, 10:38 AM Pamela Kelly notifications@github.com wrote:

Hi there, I’m looking for a good first issue. I’ll start taking a look at staging/src/k8s.io/apimachinery/pkg/types if that’s ok? There doesn’t seem to be anyone currently tracked as looking at it in the attached spreadsheet.


Hi, I am a new contributor here. I’d like to contribute and make changes to the following file(s). I have marked it as WIP in the Google Sheet. Can someone let me know if I have to do anything else before I start working on this PR?

cmd/kubeadm/app

hi @dims I think this issue was closed prematurely by #73255 (using fixes #issue in commit)

I’m looking at pkg/registry/*, all those that are left. I expect to give an update on what has been fixed in the next 24-48 hours.

Sorry I misunderstood the issue

@war-turtle the failures are still present in https://github.com/kubernetes/kubernetes/blame/master/hack/.golint_failures

there are details in the first message on how to claim + solve issues 😃

@dims should we have a checklist as part of the issue summary saying what is in progress and what is done? Since this issue is getting long, that would make it easier for new contributors to pick up work.