kubernetes: kubectl 1.5.1 logs no longer prints list of containers
kubectl logs would print a list of containers in 1.4.4 (and earlier); in 1.5.1 it now prints an error:
Error from server (BadRequest): the server rejected our request for an unknown reason (get pods kube-dns-v20-3531996453-0bfz1)
Thanks @stonith for finding - copied from https://github.com/kubernetes/kops/issues/1153
> kubectl-1.5.1 version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
> kubectl-1.5.1 logs -n kube-system kube-dns-v20-3531996453-0bfz1
Error from server (BadRequest): the server rejected our request for an unknown reason (get pods kube-dns-v20-3531996453-0bfz1)
(kubectl-1.5.1 logs -n kube-system kube-dns-v20-3531996453-0bfz1 kubedns works)
Works with 1.4.4 kubectl on the same cluster:
> kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
> kubectl logs -n kube-system kube-dns-v20-3531996453-0bfz1
Error from server: a container name must be specified for pod kube-dns-v20-3531996453-0bfz1, choose one of: [kubedns dnsmasq healthz]
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 17
- Comments: 43 (24 by maintainers)
Commits related to this issue
- Merge pull request #39831 from jessfraz/fix-38774 Automatic merge from submit-queue (batch tested with PRs 39772, 39831, 39481, 40167, 40149) Check if error is Status in result.Stream() Fix #38774 ... — committed to kubernetes/kubernetes by deleted user 7 years ago
- build fedora-atomic-k8s-1.5.1-fix-log image based on fedoar-atomic-k8s-1.5.1 building in magnum, cherry pick the patch, fix https://github.com/kubernetes/kubernetes/issues/38774. — committed to huzhengchuan/custom-fedora-atomic-builds by huzhengchuan 7 years ago
- Bump to k8s v1.5.3 to fix the regression in "kubectl logs" not printing the list of containers. See: https://github.com/kubernetes/kubernetes/issues/38774 — committed to gravitational/planet by a-palchikov 7 years ago
It’s open source software, you are more than welcome to come and help things happen at a speed which you would find more appealing.
On Tue, Feb 7, 2017 at 10:53 AM Alessandro De Maria < notifications@github.com> wrote:
I honestly don’t understand how a bug so critical can take a month to be resolved or released… Disappointed.
+1
Hi, just FYI: v1.5.2 has the same issue.
I will cherry-pick it onto 1.5
On Thu, Jan 19, 2017, 17:57 Kubernetes Submit Queue < notifications@github.com> wrote:
there is a 1.5 release scheduled on the 10th, just hang tight 😃
On Mon, Feb 6, 2017 at 8:53 PM John Busch notifications@github.com wrote:
@JayBusch, @Dmitry1987 I had the same issue and solved it.
In my case, the problem was that the worker node name was not registered in DNS and could not be resolved from the machine running kubectl. kubectl logs tries to access https://[worker node name]:10250/containerLogs/kube-system/kube-dns-6vzck/kubedns to fetch the logs; if the worker node name is not resolvable, kubectl returns InternalError. The problem is solved either by changing the worker node names to IP addresses or by adding the worker nodes to DNS.
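A quick way to verify this failure mode is to check whether each registered node name resolves from the machine running kubectl. This is a minimal sketch, assuming a working kubeconfig and a Linux host with getent:

```shell
# For every node the API server knows about, check whether its name
# resolves locally (kubectl logs contacts the kubelet by this name).
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  if getent hosts "$node" > /dev/null; then
    echo "$node resolves"
  else
    echo "$node does NOT resolve -- kubectl logs to pods on it will fail"
  fi
done
```

If a node prints as not resolvable, fixing DNS (or re-registering the node by IP) should restore kubectl logs for pods scheduled there.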
Because this seems to be popping up in related Google searches: if you are having this problem and it says
Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding
(emphasis on the InternalError bit) and you run Kubernetes on GKE, you might need to check your project sshKeys, as they might be full (32 KB max). https://cloud.google.com/sdk/gcloud/reference/compute/project-info/describe
Other symptoms are the inability to kubectl exec or even ssh to GCE instances.
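A rough way to check the GKE scenario above is to dump the project-wide metadata and look at the size of the ssh-keys entry. This is a sketch, assuming the gcloud SDK is installed and authenticated; the 32 KB figure is the project metadata value limit mentioned above:

```shell
# Dump project metadata as JSON and measure roughly how much of it
# is the ssh-keys entry (the value is capped at 32 KB per key).
gcloud compute project-info describe --format=json > project-info.json
grep -A 1 '"key": "ssh-keys"' project-info.json | wc -c
```

If the count is close to 32768 bytes, pruning stale entries from the project sshKeys metadata may fix kubectl logs, kubectl exec, and SSH to the instances.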
@alledm @Dmitry1987 you shouldn’t have to do docker logs via SSH (or equivalent). You always have to specify the container name to kubectl logs when there is more than one container in a pod; the bug is simply that we don’t print the handy list of containers. I agree that this is an annoying regression, but I don’t think it’s critical (?). You can still get the list of containers via kubectl describe pod <name>, I believe (and I’m sure someone will post a clever jsonpath one-liner 😉). It sounds like you’re talking about a different bug? Am I misunderstanding?
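A one-liner along those lines, for anyone hitting this while the regression is unfixed. The pod name and namespace below are the ones from this issue; jsonpath output is supported by both the 1.4 and 1.5 kubectl clients:

```shell
# List the containers in a pod directly from its spec,
# then fetch logs for the one you want.
kubectl get pod kube-dns-v20-3531996453-0bfz1 -n kube-system \
  -o jsonpath='{.spec.containers[*].name}'
kubectl logs -n kube-system kube-dns-v20-3531996453-0bfz1 kubedns
```

The first command prints the container names space-separated (here: kubedns dnsmasq healthz), which is the same list the 1.4.4 error message used to show.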