kubernetes: Intermittent error when doing kubectl apply -f for a HorizontalPodAutoscaler: 'unable to find api field in struct HorizontalPodAutoscalerSpec for the json field "scaleTargetRef"'
What keywords did you search in Kubernetes issues before filing this one?
- unable to find api field in struct HorizontalPodAutoscalerSpec for the json field "scaleTargetRef"
- HorizontalPodAutoscaler
- scaleTargetRef
Also searched google/stackoverflow
Is this a BUG REPORT or FEATURE REQUEST? Bug report
Kubernetes version (use kubectl version):
v1.4.0
Environment:
- GKE with kubernetes v1.4.0
What happened:
Run kubectl apply -f my-service.yaml
Intermittently, since upgrading to 1.4.0, I have been getting:
error: error when applying patch:
to:
&{0xc8203546c0 0xc8202e3340 default my-service-horizpodautoscaler /my-service/build/k8s/my-service.yaml &HorizontalPodAutoscaler{ObjectMeta:k8s_io_kubernetes_pkg_api_v1.ObjectMeta{Name:my-service-horizpodautoscaler,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: ,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:HorizontalPodAutoscalerSpec{ScaleTargetRef:CrossVersionObjectReference{Kind:Deployment,Name:my-service-frontend,APIVersion:extensions/v1beta1,},MinReplicas:*1,MaxReplicas:3,TargetCPUUtilizationPercentage:nil,},Status:HorizontalPodAutoscalerStatus{ObservedGeneration:nil,LastScaleTime:<nil>,CurrentReplicas:0,DesiredReplicas:0,CurrentCPUUtilizationPercentage:nil,},} &TypeMeta{Kind:,APIVersion:,} 1537793 false}
for: "/my-service/build/k8s/my-service.yaml": error when creating patch with:
original:
{"kind":"HorizontalPodAutoscaler","apiVersion":"autoscaling/v1","metadata":{"name":"my-service-horizpodautoscaler","creationTimestamp":null},"spec":{"scaleTargetRef":{"kind":"Deployment","name":"my-service-frontend","apiVersion":"extensions/v1beta1"},"minReplicas":1,"maxReplicas":3},"status":{"currentReplicas":0,"desiredReplicas":0}}
modified:
{"kind":"HorizontalPodAutoscaler","apiVersion":"autoscaling/v1","metadata":{"name":"my-service-horizpodautoscaler","creationTimestamp":null,"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"HorizontalPodAutoscaler\",\"apiVersion\":\"autoscaling/v1\",\"metadata\":{\"name\":\"my-service-horizpodautoscaler\",\"creationTimestamp\":null},\"spec\":{\"scaleTargetRef\":{\"kind\":\"Deployment\",\"name\":\"my-service-frontend\",\"apiVersion\":\"extensions/v1beta1\"},\"minReplicas\":1,\"maxReplicas\":3},\"status\":{\"currentReplicas\":0,\"desiredReplicas\":0}}"}},"spec":{"scaleTargetRef":{"kind":"Deployment","name":"my-service-frontend","apiVersion":"extensions/v1beta1"},"minReplicas":1,"maxReplicas":3},"status":{"currentReplicas":0,"desiredReplicas":0}}
current:
{"kind":"HorizontalPodAutoscaler","apiVersion":"extensions/v1beta1","metadata":{"name":"my-service-horizpodautoscaler","namespace":"default","selfLink":"/apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/my-service-horizpodautoscaler","uid":"REDACTED-REDA-REDA-REDA-REDACTED","resourceVersion":"1537793","creationTimestamp":"2016-09-28T15:51:27Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"HorizontalPodAutoscaler\",\"apiVersion\":\"autoscaling/v1\",\"metadata\":{\"name\":\"my-service-horizpodautoscaler\",\"creationTimestamp\":null},\"spec\":{\"scaleTargetRef\":{\"kind\":\"Deployment\",\"name\":\"my-service-frontend\",\"apiVersion\":\"extensions/v1beta1\"},\"minReplicas\":1,\"maxReplicas\":3},\"status\":{\"currentReplicas\":0,\"desiredReplicas\":0}}"}},"spec":{"scaleRef":{"kind":"Deployment","name":"my-service-frontend","apiVersion":"extensions/v1beta1","subresource":"scale"},"minReplicas":1,"maxReplicas":3,"cpuUtilization":{"targetPercentage":80}},"status":{"lastScaleTime":"2016-10-08T23:19:12Z","currentReplicas":1,"desiredReplicas":1,"currentCPUUtilizationPercentage":0}}
for: "/my-service/build/k8s/my-service.yaml": unable to find api field in struct HorizontalPodAutoscalerSpec for the json field "scaleTargetRef"
As you can see, the "current" object reports "apiVersion" as extensions/v1beta1 (and uses the old "scaleRef" field instead of "scaleTargetRef"). That might be correct, but it looks suspicious to me.
It seems to work roughly every second time I run apply, but that is hard to confirm from my small sample size.
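For reference, the live object and the annotation that apply diffs against can be inspected with plain kubectl (a minimal sketch; the resource name matches the manifest below):

# Show the live HPA as the API server returns it (apiVersion and field names included):
kubectl get hpa my-service-horizpodautoscaler -o yaml

# Show what kubectl apply will diff against, i.e. the last-applied annotation:
kubectl get hpa my-service-horizpodautoscaler -o json | grep last-applied-configuration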
What you expected to happen: The apply to work every time.
How to reproduce it:
I used this YAML file (cleaned):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
    tier: frontend
spec:
  type: NodePort
  ports:
  - name: http8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: my-service
    tier: frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-service-frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-service
        tier: frontend
    spec:
      containers:
      - name: my-service
        image: eu.gcr.io/my-project/my-service:v0.0.0
        env:
        - name: MY_ENVIRONMENT
          value: dev
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 800Mi
---
apiVersion: autoscaling/v1 # I've tried with extensions/v1beta1 and "scaleRef" below, but that doesn't work properly either (similar error message)
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-horizpodautoscaler
spec:
  scaleTargetRef:
    kind: Deployment
    name: my-service-frontend
    apiVersion: extensions/v1beta1 # I've tried with and without this, makes no difference
  minReplicas: 1
  maxReplicas: 3
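For comparison only, the same autoscaler could also be created imperatively, which avoids hand-writing an apiVersion; this is an alternative sketch, not something the manifest above relies on:

# Create an HPA targeting the deployment directly, no apiVersion spelled out by hand.
# --cpu-percent is left out so the server default applies, matching the manifest,
# which sets no targetCPUUtilizationPercentage.
kubectl autoscale deployment my-service-frontend --min=1 --max=3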
Anything else do we need to know:
A GET on the cluster using https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters/get reveals:
[...]
"addonsConfig": {
"httpLoadBalancing": {
},
"horizontalPodAutoscaling": {
}
},
[...]
"initialNodeCount": 1,
"autoscaling": {
"enabled": true,
"minNodeCount": 1,
"maxNodeCount": 3
},
[...]
"initialClusterVersion": "1.3.6",
"currentMasterVersion": "1.4.0",
"currentNodeVersion": "1.4.0",
To work around this I have temporarily moved the HPA section out into a separate file, but I would prefer to keep the complete configuration in one place.
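A minimal sketch of that split, with placeholder file names (my-service-core.yaml and my-service-hpa.yaml are examples, not the real paths):

# Apply the Service and Deployment from one file, the HPA from another.
kubectl apply -f my-service-core.yaml
kubectl apply -f my-service-hpa.yaml

# If apply still fails on the HPA, deleting and re-applying it (recreating the
# object from scratch) is a heavier-handed fallback.
kubectl delete hpa my-service-horizpodautoscaler
kubectl apply -f my-service-hpa.yaml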
Am I doing something wrong?
Thanks!
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 4
- Comments: 32 (24 by maintainers)
Commits related to this issue
- Merge pull request #38982 from ymqytw/make_apply_use_correct_versioned_obj Automatic merge from submit-queue make apply use the correct versioned obj Cherrypick [part of the changes](https://github... — committed to kubernetes/kubernetes by deleted user 8 years ago
- Merge pull request #40260 from liggitt/kubectl-tpr Automatic merge from submit-queue (batch tested with PRs 39223, 40260, 40082, 40389) make kubectl generic commands work with unstructured objects ... — committed to kubernetes/kubernetes by deleted user 7 years ago
Yes, this will be in 1.5.2
I have this issue with a 1.5.2 cluster with a 1.5.4 client, anyone else seeing it?
@soltysh any updates on this?