kubernetes: fix staticcheck failures

Many of our packages are failing staticcheck; we should fix all of them. For previous efforts, see https://github.com/kubernetes/kubernetes/issues/36858, https://github.com/kubernetes/kubernetes/issues/90208, and https://github.com/kubernetes/kubernetes/issues/81657

/help

To fix these, take a look at hack/.staticcheck_failures for a list of currently failing packages.

You can check a package by removing it from the list and running make verify WHAT=staticcheck. Once the package is no longer failing, please file a PR that includes the fixes and removes it from the list.

Before filing your PR, you should run hack/update-gofmt.sh and hack/update-bazel.sh to auto-format your code and update the build files.

I recommend keeping PRs scoped down in size for review: fix one package, a set of closely related packages, or a certain class of failures, to make it easier for your reviewers to handle.

We don’t need to fix all failures in one PR, but please avoid hundreds of single-character PRs, as it does cost time & resources to get each PR reviewed, tested, merged, etc. 🙃
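To make "a certain class of failures" concrete, here is a minimal, hypothetical Go sketch of one common staticcheck finding, SA4006 (a value assigned to a variable is never read before being overwritten), and the small change that typically fixes it. It is not taken from any of the listed packages; the package, function, and variable names are invented for illustration.

```go
// Hypothetical example of an SA4006-style fix; not from any real Kubernetes package.
package main

import (
	"fmt"
	"strconv"
)

// Before the fix, staticcheck flags code like this with SA4006 because the
// first err is assigned but never read before being overwritten:
//
//	x, err := strconv.Atoi(a)
//	y, err := strconv.Atoi(b) // the error from the first Atoi is silently dropped
//
// The fix is to check (or deliberately discard) each error before moving on.
func sum(a, b string) (int, error) {
	x, err := strconv.Atoi(a)
	if err != nil {
		return 0, err
	}
	y, err := strconv.Atoi(b)
	if err != nil {
		return 0, err
	}
	return x + y, nil
}

func main() {
	total, err := sum("1", "2")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(total) // prints 3
}
```

Bundling a batch of same-kind fixes like this, for one package or a few closely related ones, into a single PR keeps the diff easy to review.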

I can help review some of these, but many of them will require other reviewers that own the relevant code.

EDIT: I also don’t recommend /assign-ing this one; many people are going to need to work on it and we want more people to join in. You can link to this issue in your PR and comment here that you’re working on certain packages, to help avoid duplication.

EDIT2: Please do NOT put Fixes #... in your pull request, despite the template; that will close this issue. Instead, put something like Part of #92402.

I think each of these SIGs owns code that is currently failing:
/sig testing
/sig storage
/sig api-machinery
/sig architecture
/sig cli
/sig instrumentation
/sig autoscaling
/sig cloud-provider

About this issue

  • Original URL: https://github.com/kubernetes/kubernetes/issues/92402
  • State: closed
  • Created 4 years ago
  • Comments: 45 (32 by maintainers)

Most upvoted comments

We’re close to resolving this issue: all remaining packages in hack/.staticcheck_failures are covered by the PRs listed below (package on the left, PR number(s) on the right). 🙂

pkg/kubelet/cm/cpuset                                       103415
pkg/util/flag                                               103416
test/e2e/apimachinery                                       103417
vendor/k8s.io/apimachinery/pkg/util/json                    103417, 105813
vendor/k8s.io/apimachinery/pkg/util/strategicpatch          103417
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates      99572
vendor/k8s.io/apiserver/pkg/server/filters                  99572
vendor/k8s.io/apiserver/pkg/server/routes                   
vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope  106022, 105813
vendor/k8s.io/apiserver/pkg/util/wsstream                   103131, 105813
vendor/k8s.io/client-go/rest                                99142, 105813

Edits:
  • 11/01/2021: Added PR ref for vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope
  • 10/05/2021: Removed packages that have been fixed by 103023 and removed references to closed 102899
  • 07/01/2021: Added additional issues which have been brought up by 103256
  • 06/25/2021: Added PR ref for vendor/k8s.io/apiserver/pkg/util/wsstream and removed vendor/k8s.io/apiserver/pkg/storage since 100771 was merged

Sorry, didn’t notice this before:

EDIT: I also don’t recommend /assign-ing this one; many people are going to need to work on it and we want more people to join in.

@m-Bilal Yes! Go ahead; as @tiloso mentioned, vendor/k8s.io/apiserver/pkg/util/wsstream still needs to be fixed.

Hello, I’ve tried to narrow down the list of packages without active PRs as of now; hope this is useful for anyone who wants to contribute:

test/integration/scheduler_perf
vendor/k8s.io/apimachinery/pkg/api/apitesting/roundtrip
vendor/k8s.io/apimachinery/pkg/apis/meta/v1/validation
vendor/k8s.io/apimachinery/pkg/util/httpstream/spdy
vendor/k8s.io/apimachinery/pkg/util/net
vendor/k8s.io/apimachinery/pkg/util/sets/types
vendor/k8s.io/apimachinery/pkg/util/strategicpatch
vendor/k8s.io/apimachinery/pkg/util/wait
vendor/k8s.io/apiserver/pkg/admission/initializer
vendor/k8s.io/apiserver/pkg/authentication/request/x509
vendor/k8s.io/apiserver/pkg/endpoints
vendor/k8s.io/apiserver/pkg/endpoints/handlers
vendor/k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager
vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters
vendor/k8s.io/apiserver/pkg/endpoints/request
vendor/k8s.io/apiserver/pkg/registry/generic/registry
vendor/k8s.io/apiserver/pkg/registry/generic/rest
vendor/k8s.io/apiserver/pkg/registry/rest/resttest
vendor/k8s.io/apiserver/pkg/server
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates
vendor/k8s.io/apiserver/pkg/server/filters
vendor/k8s.io/apiserver/pkg/server/httplog
vendor/k8s.io/apiserver/pkg/server/options
vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig
vendor/k8s.io/apiserver/pkg/server/routes
vendor/k8s.io/apiserver/pkg/server/storage
vendor/k8s.io/apiserver/pkg/storage
vendor/k8s.io/apiserver/pkg/storage/cacher
vendor/k8s.io/apiserver/pkg/storage/etcd3
vendor/k8s.io/apiserver/pkg/storage/tests
vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope
vendor/k8s.io/apiserver/pkg/util/webhook
vendor/k8s.io/apiserver/pkg/util/wsstream
vendor/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc
vendor/k8s.io/apiserver/plugin/pkg/authenticator/token/webhook
vendor/k8s.io/apiserver/plugin/pkg/authorizer/webhook
vendor/k8s.io/client-go/rest
vendor/k8s.io/client-go/rest/watch
vendor/k8s.io/client-go/transport

@tiloso has hack/.staticcheck_failures been updated? I’m currently looking to contribute!

Hey @jpmartins201, there’s currently no pending pull request for vendor/k8s.io/apiserver/pkg/server/routes. Feel free to tackle it.

Hey @sanya301 and @mourya007! At the moment, all remaining failures are covered by PRs.

[Lots of PRs going on; unassigning this so people searching for available issues will find it. It’s not accurate to assign it to one person. 😅]

Should I do that in staging/?

Thanks 😃 Yeah, I think people tend not to work on issues that are already assigned, but I don’t think any one person is going to get PRs in for all of this; there’s lots of room for many people to work on it 😃