tidb-operator: Docker for Mac DinD deploy failed

I followed the instructions in local-dind-tutorial.md.

root@kube-master:/# kubectl get pods --all-namespaces 
NAMESPACE     NAME                                      READY     STATUS             RESTARTS   AGE
kube-system   etcd-kube-master                          1/1       Running            0          25m
kube-system   kube-apiserver-kube-master                1/1       Running            0          26m
kube-system   kube-controller-manager-kube-master       1/1       Running            0          25m
kube-system   kube-dns-64d6979467-8sqrd                 3/3       Running            12         26m
kube-system   kube-flannel-ds-amd64-cjnnz               1/1       Running            0          26m
kube-system   kube-flannel-ds-amd64-fhtv9               1/1       Running            0          26m
kube-system   kube-flannel-ds-amd64-rbq8n               1/1       Running            0          26m
kube-system   kube-flannel-ds-amd64-vczqs               1/1       Running            0          26m
kube-system   kube-proxy-54hph                          1/1       Running            0          26m
kube-system   kube-proxy-g95qx                          1/1       Running            0          26m
kube-system   kube-proxy-ks7gq                          1/1       Running            0          26m
kube-system   kube-proxy-njl5h                          1/1       Running            0          26m
kube-system   kube-scheduler-kube-master                1/1       Running            0          25m
kube-system   kubernetes-dashboard-68ddc89549-6nclg     1/1       Running            0          26m
kube-system   local-volume-provisioner-9rfvm            1/1       Running            0          26m
kube-system   local-volume-provisioner-lhkbh            1/1       Running            0          26m
kube-system   local-volume-provisioner-qs6n7            1/1       Running            0          26m
kube-system   registry-proxy-6wth2                      1/1       Running            0          26m
kube-system   registry-proxy-bzp45                      1/1       Running            0          26m
kube-system   registry-proxy-cxcpg                      1/1       Running            0          26m
kube-system   registry-proxy-z4l6c                      1/1       Running            0          26m
kube-system   tiller-deploy-df4fdf55d-lhk9p             1/1       Running            0          24m
tidb-admin    tidb-controller-manager-bcc66f746-t4tsq   1/1       Running            0          22m
tidb-admin    tidb-scheduler-5b85b688c6-wrvbg           2/2       Running            0          22m
tidb          demo-monitor-5bc85fdb7f-n4vj7             2/2       Running            0          20m
tidb          demo-monitor-configurator-sn5hb           0/1       Completed          1          20m
tidb          demo-pd-0                                 1/1       Running            0          20m
tidb          demo-pd-1                                 1/1       Running            0          20m
tidb          demo-pd-2                                 0/1       Pending            0          20m
tidb          demo-tidb-0                               0/1       CrashLoopBackOff   7          17m
tidb          demo-tidb-1                               0/1       Running            8          17m
tidb          demo-tikv-0                               2/2       Running            4          20m
tidb          demo-tikv-1                               2/2       Running            4          20m
tidb          demo-tikv-2                               0/2       Pending            0          20m

kubectl describe pod demo-pd-2 -n tidb

Name:           demo-pd-2
Namespace:      tidb
Node:           <none>
Labels:         app.kubernetes.io/component=pd
                app.kubernetes.io/instance=demo
                app.kubernetes.io/managed-by=tidb-operator
                app.kubernetes.io/name=tidb-cluster
                controller-revision-hash=demo-pd-579d4c4bdf
                statefulset.kubernetes.io/pod-name=demo-pd-2
                tidb.pingcap.com/cluster-id=6621035670618381862
Annotations:    pingcap.com/last-applied-configuration={"volumes":[{"name":"annotations","downwardAPI":{"items":[{"path":"annotations","fieldRef":{"fieldPath":"metadata.annotations"}}]}},{"name":"config","configMap":...
                prometheus.io/path=/metrics
                prometheus.io/port=2379
                prometheus.io/scrape=true
Status:         Pending
IP:             
Controlled By:  StatefulSet/demo-pd
Containers:
  pd:
    Image:       pingcap/pd:v2.0.7
    Ports:       2380/TCP, 2379/TCP
    Host Ports:  0/TCP, 0/TCP
    Command:
      /bin/sh
      /usr/local/bin/pd_start_script.sh
    Environment:
      NAMESPACE:          tidb (v1:metadata.namespace)
      PEER_SERVICE_NAME:  demo-pd-peer
      SERVICE_NAME:       demo-pd
      SET_NAME:           demo-pd
      TZ:                 UTC
    Mounts:
      /etc/pd from config (ro)
      /etc/podinfo from annotations (ro)
      /usr/local/bin from startup-script (ro)
      /var/lib/pd from pd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qkthr (ro)
Volumes:
  pd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd-demo-pd-2
    ReadOnly:   false
  annotations:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.annotations -> annotations
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      demo-pd
    Optional:  false
  startup-script:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      demo-pd
    Optional:  false
  default-token-qkthr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qkthr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From            Message
  ----     ------            ----               ----            -------
  Warning  FailedScheduling  23m                tidb-scheduler  pod has unbound PersistentVolumeClaims (repeated 3 times)
  Warning  FailedScheduling  3m (x71 over 23m)  tidb-scheduler  Failed filter with extender at URL http://127.0.0.1:10262/scheduler, code 500
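
The events above point at two things worth inspecting: the unbound PersistentVolumeClaims and the tidb-scheduler extender that keeps returning HTTP 500. A sketch of the diagnostic commands (the scheduler pod name is taken from the listing above; the container name `tidb-scheduler` is an assumption):

```shell
# Check whether the PVCs for the Pending pods are still unbound
kubectl get pvc -n tidb

# Inspect the scheduler extender that returned the 500 errors
# (container name within the 2/2 scheduler pod is assumed)
kubectl logs tidb-scheduler-5b85b688c6-wrvbg -n tidb-admin -c tidb-scheduler
```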

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 41 (41 by maintainers)

Most upvoted comments

@kirinse We’re sorry about these issues. They are caused by upstream programs, so they are a bit slower to get fixed in tidb-operator, but there are some workarounds you can try now.

For the scheduling issue, you can delete and recreate the cluster. If you’re lucky, all the pods will be scheduled correctly. If not, there is another workaround: set schedulerName to default in charts/tidb-cluster/values.yaml. This disables HA scheduling, so multiple PD or TiKV pods may be scheduled onto the same node, but that should be fine for a DinD test.
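
The two workarounds above might look like this on the command line (helm 2 syntax; the release name `demo` and the chart path are assumptions based on the tutorial):

```shell
# Workaround 1: delete and recreate the cluster
helm delete demo --purge
helm install charts/tidb-cluster --name demo --namespace=tidb

# Workaround 2: disable HA scheduling by switching to the default scheduler
# (equivalent to editing schedulerName in charts/tidb-cluster/values.yaml)
helm upgrade demo charts/tidb-cluster --set schedulerName=default
```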

For the PD pod bootstrap error, you can try the latest PD Docker image.
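
One way to pick up a newer PD image without editing values.yaml by hand (the `pd.image` values key and the `demo` release name are assumptions; pin a concrete tag rather than `latest` if you want reproducibility):

```shell
# Point the cluster at a newer PD image and roll the PD pods
helm upgrade demo charts/tidb-cluster --set pd.image=pingcap/pd:latest
```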