kops: kube-dns and kubernetes-dashboard crashing after deploying the k8s dashboard on v1.8.0-beta.1

Thanks for submitting an issue!

-------------BUG REPORT --------------------

  1. What kops version are you running? (use kops version)
     Version 1.8.0-beta.1 (git-9b71713)

  2. What Kubernetes version are you running? (use kubectl version)
     Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T21:07:53Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
     Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  3. What cloud provider are you using? AWS

  4. What commands did you execute (please provide the cluster manifest from kops get --name my.example.com, if available) and what happened after they were executed?

a. Cluster creation script:

#!/bin/bash
export KOPS_STATE_STORE=s3://XXXXXXXX
export CLUSTER_NAME=test-k8s.gett.io  # Edit with your cluster name
export VPC_ID=vpc-XXXXXXXX            # Edit with your VPC id
export NETWORK_CIDR=XXXXXXXXX         # Edit with the CIDR of your VPC

kops create cluster \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --name=${CLUSTER_NAME} \
  --vpc=${VPC_ID} \
  --cloud=aws \
  --cloud-labels "Environment=test,Name=${CLUSTER_NAME},Role=node,Provisioner=kops" \
  --ssh-public-key=/Users/roee/.ssh/gett-20150327.pub \
  --node-count=2 \
  --networking=flannel \
  --node-size=m4.large \
  --master-size=m4.large \
  --dns-zone=gett.io \
  --image=ami-785db401 \
  --topology private
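(Note: kops create cluster only records the spec in the state store; the cluster itself is then built and checked with the standard kops workflow, roughly as below. This step was not shown in the original script.)

kops update cluster ${CLUSTER_NAME} --yes
kops validate cluster --name ${CLUSTER_NAME}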
b. kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

c. Before the dashboard is deployed everything works fine (the cluster is ready); after deploying it, pods start crashing:

$ kops validate cluster --name test-k8s.gett.io
Validating cluster test-k8s.gett.io

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-1a  Master  m4.large     1    1    eu-west-1a
master-eu-west-1b  Master  m4.large     1    1    eu-west-1b
master-eu-west-1c  Master  m4.large     1    1    eu-west-1c
nodes              Node    m4.large     2    2    eu-west-1a,eu-west-1b,eu-west-1c

NODE STATUS
NAME                                         ROLE    READY
ip-172-23-77-167.eu-west-1.compute.internal  master  True
ip-172-23-78-218.eu-west-1.compute.internal  master  True
ip-172-23-79-214.eu-west-1.compute.internal  node    True
ip-172-23-80-34.eu-west-1.compute.internal   node    True
ip-172-23-81-65.eu-west-1.compute.internal   master  True

Pod Failures in kube-system
NAME
cluster-autoscaler-7f877d6965-5nnbx
kube-dns-7f56f9f8c7-prdhs
kubernetes-dashboard-747c4f7cf-4k8vc

Validation Failed
Ready Master(s) 3 out of 3. Ready Node(s) 2 out of 2.
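To capture why these pods are crashing, standard kubectl calls like the following can be run (a minimal sketch: the pod names are the ones reported above, kubedns is the usual container name in the kube-dns deployment, and --previous assumes the containers have already restarted at least once):

kubectl -n kube-system get pods
kubectl -n kube-system describe pod kube-dns-7f56f9f8c7-prdhs
kubectl -n kube-system logs kube-dns-7f56f9f8c7-prdhs -c kubedns --previous
kubectl -n kube-system logs kubernetes-dashboard-747c4f7cf-4k8vc --previous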

(output from kops get --name my.example.com)

$ kops get --name test-k8s.gett.io
Cluster
NAME              CLOUD  ZONES
test-k8s.gett.io  aws    eu-west-1a,eu-west-1b,eu-west-1c

Instance Groups
NAME               ROLE    MACHINETYPE  MIN  MAX  ZONES
master-eu-west-1a  Master  m4.large     1    1    eu-west-1a
master-eu-west-1b  Master  m4.large     1    1    eu-west-1b
master-eu-west-1c  Master  m4.large     1    1    eu-west-1c
nodes              Node    m4.large     2    2    eu-west-1a,eu-west-1b,eu-west-1c

  5. How can we reproduce it (as minimally and precisely as possible): Install a cluster using kops 1.8.0-beta.1, then deploy the dashboard with: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
  6. Anything else we need to know? kops config file:
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2017-10-23T12:47:58Z
  name: test-k8s.gett.io
spec:
  api:
    loadBalancer:
      type: Internal
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudLabels:
    Environment: test
    Name: test-k8s.gett.io
    Provisioner: kops
    Role: node
  cloudProvider: aws
  configBase: s3://xxxx
  dnsZone: gett.io
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    - instanceGroup: master-eu-west-1b
      name: b
    - instanceGroup: master-eu-west-1c
      name: c
    name: main
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    - instanceGroup: master-eu-west-1b
      name: b
    - instanceGroup: master-eu-west-1c
      name: c
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    authorizationMode: RBAC
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.8.0
  masterInternalName: api.internal.test-k8s.gett.io
  masterPublicName: api.test-k8s.gett.io
  networkCIDR: 172.23.0.0/16
  networkID: vpc-ed80478a
  networking:
    flannel:
      backend: vxlan
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.23.76.0/23
    egress: nat-987654321
    id: subnet-789c061f
    name: eu-west-1a
    type: Private
    zone: eu-west-1a
  - cidr: 172.23.78.0/23
    egress: nat-987654321
    id: subnet-929efcdb
    name: eu-west-1b
    type: Private
    zone: eu-west-1b
  - cidr: 172.23.80.0/23
    egress: nat-987654321
    id: subnet-59f05602
    name: eu-west-1c
    type: Private
    zone: eu-west-1c
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
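
(For reference, any change to the spec above is applied with the standard kops workflow; a sketch, assuming the same state store and cluster name as in the script above:)

export KOPS_STATE_STORE=s3://XXXXXXXX
kops edit cluster test-k8s.gett.io                    # opens the spec above in $EDITOR
kops update cluster test-k8s.gett.io --yes            # applies the change
kops rolling-update cluster test-k8s.gett.io --yes    # restarts instances if required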

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 1
  • Comments: 18 (13 by maintainers)

Most upvoted comments

Closing … yay