origin: Kubernetes issues: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

I am following https://github.com/redhat-iot/summit2017/ to set up the Red Hat IoT solution for fleet management, and I am getting the errors below: the pods have not started and are stuck in the Error state. I have installed oc on Ubuntu 16.10.
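(For context, the exact bootstrap steps are not shown below; assuming the demo was started on a local all-in-one cluster, the sequence would have looked roughly like the sketch below, with the host IP taken from the `oc status` output further down. Treat it as an assumption, not the project's documented procedure.)

```sh
# Assumed local bootstrap (not confirmed by this report)
oc cluster up --public-hostname=192.168.225.169       # all-in-one origin cluster on the host IP
oc login -u developer https://192.168.225.169:8443     # regular demo user
oc new-project redhat-iot --display-name="Red Hat IoT Demo"
```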

##### Current Result
NAME                      READY     STATUS    RESTARTS   AGE
dashboard-1-build         0/1       Error     0          19m
datastore-1-deploy        0/1       Error     0          4m
datastore-proxy-1-build   0/1       Error     0          19m
elasticsearch-1-deploy    0/1       Error     0          19m
kapua-api-1-deploy        0/1       Error     0          19m
kapua-broker-1-deploy     0/1       Error     0          19m
kapua-console-1-deploy    0/1       Error     0          19m
simulator-1-deploy        0/1       Error     0          19m
sql-1-deploy              0/1       Error     0          19m
The following command lists the errors:
oc status -v
In project Red Hat IoT Demo (redhat-iot) on server https://192.168.225.169:8443

http://dashboard-redhat-iot.192.168.225.169.xip.io to pod port 8080-tcp (svc/dashboard)
dc/dashboard deploys istag/dashboard:latest <-
bc/dashboard source builds https://github.com/redhat-iot/summit2017#master on openshift/nodejs:4
build #1 failed 19 minutes ago
deployment #1 waiting on image or update

svc/datastore-hotrod - 172.30.4.196:11333
dc/datastore deploys openshift/jboss-datagrid65-openshift:1.2
deployment #1 failed 5 minutes ago: image change

http://datastore-proxy-redhat-iot.192.168.225.169.xip.io to pod port 8080-tcp (svc/datastore-proxy)
dc/datastore-proxy deploys istag/datastore-proxy:latest <-
bc/datastore-proxy source builds https://github.com/redhat-iot/summit2017#master on openshift/wildfly:10.1
build #1 failed 19 minutes ago
deployment #1 waiting on image or update

http://search-redhat-iot.192.168.225.169.xip.io to pod port http (svc/elasticsearch)
dc/elasticsearch deploys docker.io/library/elasticsearch:2.4
deployment #1 failed 20 minutes ago: config change

http://api-redhat-iot.192.168.225.169.xip.io to pod port http (svc/kapua-api)
dc/kapua-api deploys docker.io/redhatiot/kapua-api-jetty:2017-04-08
deployment #1 failed 20 minutes ago: config change

http://broker-redhat-iot.192.168.225.169.xip.io to pod port mqtt-websocket-tcp (svc/kapua-broker)
dc/kapua-broker deploys docker.io/redhatiot/kapua-broker:2017-04-08
deployment #1 failed 20 minutes ago: config change

http://console-redhat-iot.192.168.225.169.xip.io to pod port http (svc/kapua-console)
dc/kapua-console deploys docker.io/redhatiot/kapua-console-jetty:2017-04-08
deployment #1 failed 20 minutes ago: config change

svc/sql - 172.30.56.47 ports 3306, 8181
dc/sql deploys docker.io/redhatiot/kapua-sql:2017-04-08
deployment #1 failed 20 minutes ago: config change

dc/simulator deploys docker.io/redhatiot/kura-simulator:2017-04-08
deployment #1 failed 20 minutes ago: config change

Detailed errors for each pod:

build/dashboard-1 has failed.
try: Inspect the build failure with 'oc logs -f bc/dashboard'
build/datastore-proxy-1 has failed.
try: Inspect the build failure with 'oc logs -f bc/datastore-proxy'
route/api is routing traffic to svc/kapua-api, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/broker is routing traffic to svc/kapua-broker, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/console is routing traffic to svc/kapua-console, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/dashboard is routing traffic to svc/dashboard, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/datastore-proxy is routing traffic to svc/datastore-proxy, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/search is routing traffic to svc/elasticsearch, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
Warnings:

The image trigger for dc/dashboard will have no effect until istag/dashboard:latest is imported or created by a build.
The image trigger for dc/datastore-proxy will have no effect until istag/datastore-proxy:latest is imported or created by a build.
Info:

pod/dashboard-1-build has no liveness probe to verify pods are still running.
try: oc set probe pod/dashboard-1-build --liveness ...
pod/datastore-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/datastore-1-deploy --liveness ...
pod/datastore-proxy-1-build has no liveness probe to verify pods are still running.
try: oc set probe pod/datastore-proxy-1-build --liveness ...
pod/elasticsearch-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/elasticsearch-1-deploy --liveness ...
pod/kapua-api-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/kapua-api-1-deploy --liveness ...
pod/kapua-broker-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/kapua-broker-1-deploy --liveness ...
pod/kapua-console-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/kapua-console-1-deploy --liveness ...
pod/simulator-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/simulator-1-deploy --liveness ...
pod/sql-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/sql-1-deploy --liveness ...
dc/datastore has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
try: oc set probe dc/datastore --readiness ...
dc/datastore has no liveness probe to verify pods are still running.
try: oc set probe dc/datastore --liveness ...
dc/elasticsearch has no liveness probe to verify pods are still running.
try: oc set probe dc/elasticsearch --liveness ...
dc/kapua-api has no liveness probe to verify pods are still running.
try: oc set probe dc/kapua-api --liveness ...
dc/kapua-broker has no liveness probe to verify pods are still running.
try: oc set probe dc/kapua-broker --liveness ...
dc/kapua-console has no liveness probe to verify pods are still running.
try: oc set probe dc/kapua-console --liveness ...
dc/simulator has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
try: oc set probe dc/simulator --readiness ...
dc/simulator has no liveness probe to verify pods are still running.
try: oc set probe dc/simulator --liveness ...
dc/sql has no liveness probe to verify pods are still running.
try: oc set probe dc/sql --liveness ...
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
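(The probe messages above are only advisory output from `oc status -v` and are unrelated to the actual failures. If someone wanted to act on one of them, a suggestion expands to something like the following; the URL, path, and ports are illustrative assumptions.)

```sh
# Illustrative only: how one of the "oc set probe" suggestions above could be filled in
oc set probe dc/kapua-api --liveness --get-url=http://:8080/ --initial-delay-seconds=30
oc set probe dc/datastore --readiness --open-tcp=11333
```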

Pod-specific errors:

oc logs -f bc/dashboard
error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
ganesh@ganesh-Lenovo-ideapad-100-14IBD:~/summit2017$ oc logs -f bc/datastore-proxy
error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
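(Both build-log requests fail because `oc` appears to fall back to looking for an in-cluster service-account token and cannot find one. A first check, before touching any configuration, is whether the project's service accounts have token secrets at all; the commands below are standard `oc` calls, and the expected `builder-token-*` naming is an assumption.)

```sh
# Do the build/deploy service accounts have token secrets?
oc get serviceaccounts -n redhat-iot
oc describe serviceaccount builder -n redhat-iot   # "Tokens:" should list something like builder-token-xxxxx
oc get secrets -n redhat-iot | grep token
```

If the tokens exist, the failed builds can simply be retried with `oc start-build dashboard` and `oc start-build datastore-proxy`; if they are missing, the controller-manager fix quoted in the comments at the bottom of this issue is the relevant one.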
oc adm router -o yaml
error: router could not be created; service account "router" is not allowed to access the host network on nodes, grant access with oadm policy add-scc-to-user hostnetwork -z router
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: null
    name: router
- apiVersion: v1
  groupNames: null
  kind: ClusterRoleBinding
  metadata:
    creationTimestamp: null
    name: router-router-role
  roleRef:
    kind: ClusterRole
    name: system:router
  subjects:
  - kind: ServiceAccount
    name: router
    namespace: redhat-iot
  userNames:
  - system:serviceaccount:redhat-iot:router
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    labels:
      router: router
    name: router
  spec:
    replicas: 1
    selector:
      router: router
    strategy:
      resources: {}
      rollingParams:
        maxSurge: 0
        maxUnavailable: 25%
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          router: router
      spec:
        containers:
        - env:
          - name: DEFAULT_CERTIFICATE_DIR
            value: /etc/pki/tls/private
          - name: ROUTER_EXTERNAL_HOST_HOSTNAME
          - name: ROUTER_EXTERNAL_HOST_HTTPS_VSERVER
          - name: ROUTER_EXTERNAL_HOST_HTTP_VSERVER
          - name: ROUTER_EXTERNAL_HOST_INSECURE
            value: "false"
          - name: ROUTER_EXTERNAL_HOST_INTERNAL_ADDRESS
          - name: ROUTER_EXTERNAL_HOST_PARTITION_PATH
          - name: ROUTER_EXTERNAL_HOST_PASSWORD
          - name: ROUTER_EXTERNAL_HOST_PRIVKEY
            value: /etc/secret-volume/router.pem
          - name: ROUTER_EXTERNAL_HOST_USERNAME
          - name: ROUTER_EXTERNAL_HOST_VXLAN_GW_CIDR
          - name: ROUTER_SERVICE_HTTPS_PORT
            value: "443"
          - name: ROUTER_SERVICE_HTTP_PORT
            value: "80"
          - name: ROUTER_SERVICE_NAME
            value: router
          - name: ROUTER_SERVICE_NAMESPACE
            value: redhat-iot
          - name: ROUTER_SUBDOMAIN
          - name: STATS_PASSWORD
            value: PjMPJuiol6
          - name: STATS_PORT
            value: "1936"
          - name: STATS_USERNAME
            value: admin
          image: openshift/origin-haproxy-router:v1.5.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              host: localhost
              path: /healthz
              port: 1936
            initialDelaySeconds: 10
          name: router
          ports:
          - containerPort: 80
          - containerPort: 443
          - containerPort: 1936
            name: stats
            protocol: TCP
          readinessProbe:
            httpGet:
              host: localhost
              path: /healthz
              port: 1936
            initialDelaySeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
          volumeMounts:
          - mountPath: /etc/pki/tls/private
            name: server-certificate
            readOnly: true
        hostNetwork: true
        securityContext: {}
        serviceAccount: router
        serviceAccountName: router
        volumes:
        - name: server-certificate
          secret:
            secretName: router-certs
    test: false
    triggers:
    - type: ConfigChange
  status:
    availableReplicas: 0
    latestVersion: 0
    observedGeneration: 0
    replicas: 0
    unavailableReplicas: 0
    updatedReplicas: 0
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      service.alpha.openshift.io/serving-cert-secret-name: router-certs
    creationTimestamp: null
    labels:
      router: router
    name: router
  spec:
    ports:
    - name: 80-tcp
      port: 80
      targetPort: 80
    - name: 443-tcp
      port: 443
      targetPort: 443
    - name: 1936-tcp
      port: 1936
      protocol: TCP
      targetPort: 1936
    selector:
      router: router
  status:
    loadBalancer: {}
kind: List
metadata: {}
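(The `oc adm router` error above carries its own remediation hint. A hedged sketch of it, assuming cluster-admin access is available and that the router is meant to run in the default project, neither of which this report confirms:)

```sh
# Sketch of the fix suggested by the "not allowed to access the host network" error (assumptions noted above)
oc login -u system:admin
oc adm policy add-scc-to-user hostnetwork -z router -n default
oc adm router router --service-account=router -n default
```

Note that a cluster started with `oc cluster up` normally already ships a router in the default project, so a missing or broken router here may just be another symptom of the API server being unreachable (see the diagnostics output below).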
##### Version
oc version
oc v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

docker version
Client:
 Version:      17.03.1-ce
 API version:  1.24 (downgraded from 1.27)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:17:43 2017
 OS/Arch:      linux/amd64
Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)
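(The daemon error on the last line is a separate Docker client/server API mismatch. A common workaround, stated here as an assumption rather than something verified on this machine, is to pin the client to the daemon's API version or to upgrade the daemon.)

```sh
# Possible workaround for "client is newer than server"; assumes the daemon speaks API 1.23
export DOCKER_API_VERSION=1.23
docker version
```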
##### Expected Result

The application deploys and runs correctly, with all pods reaching the Running state.

##### Additional Information
oc adm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/home/ganesh/.kube/config'
[Note] Could not configure a client, so client diagnostics are limited to testing configuration and connection
[Note] Could not configure a client with cluster-admin permissions for the current server, so cluster diagnostics will be skipped

[Note] Running diagnostic: ConfigContexts[redhat-iot/192-168-225-169:8443/system:admin]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0015 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
       The current client config context is 'redhat-iot/192-168-225-169:8443/system:admin':
       The server URL is 'https://192.168.225.169:8443'
       The user authentication is 'system:admin/192-168-225-169:8443'
       The current project is 'redhat-iot'
       (*url.Error) Get https://192.168.225.169:8443/api: dial tcp 192.168.225.169:8443: getsockopt: connection refused
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.
       
[Note] Running diagnostic: ConfigContexts[/192-168-225-169:8443/developer]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0015 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
       For client config context '/192-168-225-169:8443/developer':
       The server URL is 'https://192.168.225.169:8443'
       The user authentication is 'developer/192-168-225-169:8443'
       The current project is 'default'
       (*url.Error) Get https://192.168.225.169:8443/api: dial tcp 192.168.225.169:8443: getsockopt: connection refused
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.
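(Both contexts fail with `connection refused`, which points at the origin API server no longer listening on 192.168.225.169:8443 rather than at anything project-specific. A quick hedged check, assuming the cluster was started with `oc cluster up` so the API server runs in a Docker container named `origin`:)

```sh
# Is the all-in-one origin container still running, and is the API answering?
docker ps --filter name=origin
curl -k https://192.168.225.169:8443/healthz
```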

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 30 (11 by maintainers)

Most upvoted comments

# If you forget to make this change, you may get the error: No API token found for service account "default"
vi /etc/kubernetes/controller-manager
Update the configuration of the controller manager:
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt"

Also, you need to configure the admission-control flag on the apiserver, something like --admission-control=ServiceAccount.
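A hedged sketch of that comment's fix is shown below; the file layout and service names follow the Fedora/CentOS kubernetes packages and are assumptions that may not match every installation (in particular, an `oc cluster up` setup configures this differently).

```sh
# /etc/kubernetes/controller-manager -- sign service-account tokens with the API server's key (assumed paths)
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt"

# /etc/kubernetes/apiserver -- ensure ServiceAccount is in the admission chain (variable name is an assumption)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"

# restart so the flags take effect, then delete the stuck pods so they are recreated with tokens
systemctl restart kube-apiserver kube-controller-manager
```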

Sorry, Fedora 25 here. I'll try to get the latest Ubuntu on a VM and look into the problem.