kubectl: "kubectl rollout history --revision=n" produces wrong/inconsistent output with "-o yaml"?

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kubectl rollout


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: GCP n1-highcpu-2 (2 vCPUs, 1.8 GB memory)
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.6 LTS (Xenial Xerus)
  • Kernel (e.g. uname -a): Linux node0 4.15.0-1027-gcp #28~16.04.1-Ubuntu SMP Fri Jan 18 10:10:51 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubeadm
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:35:32Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  • Others:

What happened: I was learning the basics of rolling updates on a DaemonSet. The template is very simple:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ds-one
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        system: DaemonSetOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80

After I used kubectl set image ds ds-one nginx=nginx:1.12.1-alpine to flip the image between nginx:1.9.1 and nginx:1.12.1-alpine back and forth a few times (deleting the pods each time so they picked up the new image), I ran kubectl rollout history daemonset ds-one to check the rollout history:

daemonset.extensions/ds-one 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>

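The flip-and-check steps above can be sketched as a small script. This is a non-authoritative sketch: it assumes the ds-one DaemonSet from the template is already applied to a reachable cluster, and it is guarded so that it is a no-op where kubectl is unavailable. (With the extensions/v1beta1 DaemonSet API, the default update strategy was OnDelete, which is why the pods had to be deleted by hand to pick up the new image.)

```shell
#!/bin/sh
# Sketch of the reproduction loop described above. Assumes the ds-one
# DaemonSet (from the template in this report) exists in the current
# namespace; does nothing if kubectl is not installed.
if command -v kubectl >/dev/null 2>&1; then
  # Flip the image back and forth to accumulate a few revisions.
  kubectl set image ds ds-one nginx=nginx:1.12.1-alpine
  kubectl set image ds ds-one nginx=nginx:1.9.1
  kubectl set image ds ds-one nginx=nginx:1.12.1-alpine

  # With OnDelete, pods must be deleted manually to roll out the change:
  kubectl delete pods -l app=nginx

  # Then inspect the recorded history.
  kubectl rollout history daemonset ds-one
else
  echo "kubectl not found; skipping reproduction steps"
fi
```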
Then I used kubectl rollout history daemonset ds-one --revision=3 and ... --revision=4 to check the details of each revision:

daemonset.extensions/ds-one with revision #3
Pod Template:
  Labels:       app=nginx
        system=DaemonSetOne
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
daemonset.extensions/ds-one with revision #4
Pod Template:
  Labels:       app=nginx
        system=DaemonSetOne
  Containers:
   nginx:
    Image:      nginx:1.12.1-alpine
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>

However, when I repeated the same two commands with an extra -o yaml, I got exactly the same output, which says image: nginx:1.12.1-alpine (the latest revision), regardless of which revision I specified in the command.
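For what it's worth, the per-revision data can be read without going through rollout history at all: Kubernetes persists DaemonSet revision history as ControllerRevision objects, each of which carries that revision's serialized pod template under .data. A sketch (assuming the ds-one DaemonSet and its app=nginx label from above; guarded so it is a no-op without kubectl):

```shell
#!/bin/sh
# Inspect a DaemonSet's per-revision pod templates directly from the
# ControllerRevision objects, instead of "rollout history -o yaml".
# Assumes the DaemonSet from this report (label app=nginx); does nothing
# if kubectl is not installed.
if command -v kubectl >/dev/null 2>&1; then
  # List the stored revisions for this DaemonSet.
  kubectl get controllerrevision -l app=nginx

  # Print the serialized pod template of revision 3 only.
  kubectl get controllerrevision -l app=nginx \
    -o jsonpath='{range .items[?(@.revision==3)]}{.data}{"\n"}{end}'
else
  echo "kubectl not found; skipping"
fi
```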

What you expected to happen: The help text says -o only controls the output format, while --revision selects which revision's details to show. So using -o together with --revision should not change the content produced by the --revision option, I reckon.

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 12
  • Comments: 41 (12 by maintainers)

Most upvoted comments

Issue reopened as it is (finally) gathering some interest from other people.

If it is still an issue we can keep it open. Make sure you 👍 the issue as that helps bring attention to it.

It should at least get triaged. I’ll see if this can be added to the list for the next bug review

/reopen

/reopen

Man, I don’t know what to do with this ticket, to be honest. Every time, it gets some interest immediately after the bot auto-closes it. The bot says “The Kubernetes project currently lacks enough active contributors”, so I reckon that means there is no chance of it getting solved unless someone volunteers to contribute. I’m not a Golang dev and I don’t understand Kubernetes internals, unfortunately.