origin: unable to periodically refresh dnsmasq status: The name uk.org.thekelleys.dnsmasq was not provided by any .service files
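The error in the title is a D-Bus name lookup failure: OpenShift's dnsmasq monitor resolves the bus name `uk.org.thekelleys.dnsmasq`, which dnsmasq only registers when its D-Bus interface is enabled. A minimal sketch of the relevant config line, assuming dnsmasq is installed and reads the stock `/etc/dnsmasq.conf`:

```
# /etc/dnsmasq.conf
# Register uk.org.thekelleys.dnsmasq on the system bus so it can be queried over D-Bus
enable-dbus
```

After adding the option, restart the service (`systemctl restart dnsmasq`). If dnsmasq is not installed, or is built/started without D-Bus support, this lookup will keep failing and the refresh error will repeat.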

Version

Linux: CentOS Linux release 7.4.1708 (Core)

openshift version
openshift v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

Steps To Reproduce
  1. openshift start
Current Result
W1213 16:54:40.075340   30603 start_master.go:290] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue.
W1213 16:54:40.075439   30603 start_master.go:290] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue.
E1213 16:54:40.150360   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:40.949447   30603 start_master.go:522] Starting master on 0.0.0.0:8443 (v3.7.0+7ed6862)
I1213 16:54:40.949472   30603 start_master.go:523] Public master address is https://10.1.88.33:8443
I1213 16:54:40.949488   30603 start_master.go:530] Using images from "openshift/origin-<component>:v3.7.0"
2017-12-13 16:54:40.949598 I | embed: peerTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2017-12-13 16:54:40.950366 I | embed: listening for peers on https://0.0.0.0:7001
2017-12-13 16:54:40.950425 I | embed: listening for client requests on 0.0.0.0:4001
2017-12-13 16:54:40.960471 I | etcdserver: name = openshift.local
2017-12-13 16:54:40.960494 I | etcdserver: data dir = openshift.local.etcd
2017-12-13 16:54:40.960508 I | etcdserver: member dir = openshift.local.etcd/member
2017-12-13 16:54:40.960519 I | etcdserver: heartbeat = 100ms
2017-12-13 16:54:40.960534 I | etcdserver: election = 1000ms
2017-12-13 16:54:40.960544 I | etcdserver: snapshot count = 100000
2017-12-13 16:54:40.960567 I | etcdserver: advertise client URLs = https://10.1.88.33:4001
2017-12-13 16:54:40.980262 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.1.88.33:4001: getsockopt: connection refused"; Reconnecting to {10.1.88.33:4001 <nil>}
2017-12-13 16:54:40.987442 I | etcdserver: restarting member e9875137dd2285e9 in cluster bcab73cc7ff7b784 at commit index 1623
2017-12-13 16:54:40.987598 I | raft: e9875137dd2285e9 became follower at term 7
2017-12-13 16:54:40.987619 I | raft: newRaft e9875137dd2285e9 [peers: [], term: 7, commit: 1623, applied: 0, lastindex: 1623, lastterm: 7]
2017-12-13 16:54:41.142431 W | auth: simple token is not cryptographically signed
E1213 16:54:41.151117   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
2017-12-13 16:54:41.200640 I | etcdserver: starting server... [version: 3.2.8, cluster version: to_be_decided]
2017-12-13 16:54:41.200686 I | embed: ClientTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2017-12-13 16:54:41.203398 I | etcdserver/membership: added member e9875137dd2285e9 [https://10.1.88.33:7001] to cluster bcab73cc7ff7b784
2017-12-13 16:54:41.203580 N | etcdserver/membership: set the initial cluster version to 3.2
2017-12-13 16:54:41.203653 I | etcdserver/api: enabled capabilities for version 3.2
2017-12-13 16:54:41.252632 I | raft: e9875137dd2285e9 is starting a new election at term 7
2017-12-13 16:54:41.252749 I | raft: e9875137dd2285e9 became candidate at term 8
2017-12-13 16:54:41.252779 I | raft: e9875137dd2285e9 received MsgVoteResp from e9875137dd2285e9 at term 8
2017-12-13 16:54:41.252803 I | raft: e9875137dd2285e9 became leader at term 8
2017-12-13 16:54:41.252820 I | raft: raft.node: e9875137dd2285e9 elected leader e9875137dd2285e9 at term 8
2017-12-13 16:54:41.265640 I | etcdserver: published {Name:openshift.local ClientURLs:[https://10.1.88.33:4001]} to cluster bcab73cc7ff7b784
I1213 16:54:41.265712   30603 run.go:81] Started etcd at 10.1.88.33:4001
2017-12-13 16:54:41.267117 I | embed: ready to serve client requests
2017-12-13 16:54:41.267644 I | embed: serving client requests on [::]:4001
2017-12-13 16:54:41.314519 I | etcdserver/api/v3rpc: Failed to dial 0.0.0.0:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
W1213 16:54:41.315723   30603 run_components.go:49] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
W1213 16:54:41.315963   30603 server.go:85] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
I1213 16:54:41.316080   30603 logs.go:41] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I1213 16:54:41.316093   30603 logs.go:41] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
I1213 16:54:41.416939   30603 run_components.go:75] DNS listening at 0.0.0.0:8053
E1213 16:54:42.177290   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:42.298033   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:42.298051   30603 master.go:329] Starting OAuth2 API at /oauth
I1213 16:54:42.303405   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:42.303420   30603 master.go:329] Starting OAuth2 API at /oauth
I1213 16:54:42.306947   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:42.306961   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:42.321081   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:42.321099   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:42 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:42 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:42.339273   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:42.339294   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:42.383099   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:42.383199   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:42 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:42 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:42.617058   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:42.617088   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:42.665406   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
W1213 16:54:42.665437   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
[restful] 2017/12/13 16:54:42 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:42 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:42.921556   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:42.921595   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:42.955238   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:42.955263   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:42 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:42 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
E1213 16:54:43.214313   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:43.226697   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:43.226712   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:43.268842   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:43.268879   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:43 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:43 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:43.698657   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:43.698687   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:43.828016   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:43.828047   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:43 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:43 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:44.254272   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:44.254298   30603 master.go:329] Starting OAuth2 API at /oauth
E1213 16:54:44.304691   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
W1213 16:54:44.392558   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
W1213 16:54:44.392591   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
[restful] 2017/12/13 16:54:44 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:44 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:44.728186   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:44.728243   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:44.833496   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
W1213 16:54:44.833534   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
[restful] 2017/12/13 16:54:44 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:44 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:45.135426   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:45.135462   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:45.170951   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:45.170976   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:45 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:45 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
E1213 16:54:45.358570   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:45.649836   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:45.649867   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:45.746629   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:45.746666   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:45 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:45 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
E1213 16:54:46.234284   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:46.270252   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:46.270281   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:46.683501   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:46.683538   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:46 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:46 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:47.163274   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:47.163298   30603 master.go:329] Starting OAuth2 API at /oauth
E1213 16:54:47.257302   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
W1213 16:54:47.289229   30603 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1213 16:54:47.289251   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:47 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:47 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
I1213 16:54:47.825350   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:47.825390   30603 master.go:329] Starting OAuth2 API at /oauth
I1213 16:54:47.914723   30603 openshift_apiserver.go:544] Started Origin API at /oapi/v1
W1213 16:54:48.132507   30603 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2017/12/13 16:54:48 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:48 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
E1213 16:54:48.348632   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
E1213 16:54:49.288403   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:49.637598   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:49.637640   30603 master.go:329] Starting OAuth2 API at /oauth
W1213 16:54:49.895071   30603 genericapiserver.go:371] Skipping API autoscaling/v2alpha1 because it has no resources.
W1213 16:54:50.079673   30603 genericapiserver.go:371] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
E1213 16:54:50.444767   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
[restful] 2017/12/13 16:54:50 log.go:33: [restful/swagger] listing is available at https://10.1.88.33:8443/swaggerapi
[restful] 2017/12/13 16:54:50 log.go:33: [restful/swagger] https://10.1.88.33:8443/swaggerui/ is mapped to folder /swagger-ui/
E1213 16:54:51.208098   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
E1213 16:54:52.188350   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:52.507778   30603 master.go:320] Starting Web Console https://10.1.88.33:8443/console/
I1213 16:54:52.507812   30603 master.go:329] Starting OAuth2 API at /oauth
I1213 16:54:52.575672   30603 serve.go:85] Serving securely on 0.0.0.0:8443
I1213 16:54:52.575881   30603 clusterquotamapping.go:160] Starting ClusterQuotaMappingController controller
I1213 16:54:52.575912   30603 available_controller.go:256] Starting AvailableConditionController
I1213 16:54:52.575944   30603 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1213 16:54:52.575975   30603 crd_finalizer.go:248] Starting CRDFinalizer
I1213 16:54:52.577774   30603 openshift_apiserver.go:642] Using default project node label selector: 
I1213 16:54:52.577824   30603 tprregistration_controller.go:147] Starting tpr-autoregister controller
I1213 16:54:52.577840   30603 controller_utils.go:1025] Waiting for caches to sync for tpr-autoregister controller
I1213 16:54:52.579067   30603 apiservice_controller.go:113] Starting APIServiceRegistrationController
I1213 16:54:52.579086   30603 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1213 16:54:52.579981   30603 customresource_discovery_controller.go:152] Starting DiscoveryController
I1213 16:54:52.580010   30603 naming_controller.go:284] Starting NamingConditionController
I1213 16:54:52.581260   30603 autoregister_controller.go:141] Starting autoregister controller
I1213 16:54:52.581271   30603 cache.go:32] Waiting for caches to sync for autoregister controller
I1213 16:54:53.191766   30603 cache.go:39] Caches are synced for autoregister controller
I1213 16:54:53.211757   30603 cache.go:39] Caches are synced for AvailableConditionController controller
I1213 16:54:53.212502   30603 cache.go:39] Caches are synced for APIServiceRegistrationController controller
E1213 16:54:53.315755   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:53.424725   30603 controller_utils.go:1032] Caches are synced for tpr-autoregister controller
W1213 16:54:53.621791   30603 server.go:190] WARNING: all flags other than --config, --write-config-to, and --cleanup-iptables are deprecated. Please begin using a config file ASAP.
I1213 16:54:53.651724   30603 client.go:72] Connecting to docker on unix:///var/run/docker.sock
I1213 16:54:53.651780   30603 client.go:92] Start docker client with request timeout=2m0s
W1213 16:54:53.791088   30603 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
I1213 16:54:54.083632   30603 trace.go:76] Trace[1740807409]: "List /api/v1/secrets" (started: 2017-12-13 16:54:53.057394249 +0800 CST) (total time: 1.02618037s):
Trace[1740807409]: [1.007189658s] [1.007091863s] Listing from storage done
E1213 16:54:54.246867   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:54.333772   30603 start_node.go:469] Starting node server33 (v3.7.0+7ed6862)
I1213 16:54:54.333856   30603 client.go:72] Connecting to docker on unix:///var/run/docker.sock
I1213 16:54:54.333872   30603 client.go:92] Start docker client with request timeout=2m0s
I1213 16:54:54.420576   30603 node.go:109] Connecting to Docker at unix:///var/run/docker.sock
I1213 16:54:54.779328   30603 feature_gate.go:144] feature gates: map[]
I1213 16:54:54.785198   30603 manager.go:144] cAdvisor running in container: "/user.slice"
I1213 16:54:54.906730   30603 network.go:88] Using iptables Proxier.
W1213 16:54:54.966054   30603 manager.go:152] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
W1213 16:54:54.966261   30603 manager.go:161] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: connection refused
W1213 16:54:55.041125   30603 lease_endpoint_reconciler.go:176] Resetting endpoints for master service "kubernetes" to [10.1.88.33]
W1213 16:54:55.074276   30603 proxier.go:488] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1213 16:54:55.074960   30603 network.go:119] Tearing down userspace rules.
I1213 16:54:55.107468   30603 fs.go:124] Filesystem partitions: map[/dev/vda1:{mountpoint:/boot major:253 minor:1 fsType:xfs blockSize:0} /dev/vda3:{mountpoint:/var/lib/docker/devicemapper major:253 minor:3 fsType:xfs blockSize:0}]
I1213 16:54:55.108950   30603 manager.go:211] Machine: {NumCores:1 CpuFrequency:1999999 MemoryCapacity:3975077888 MachineID:88793d3afd024ec19092e9580025bf89 SystemUUID:AFD60EDD-16FB-42E0-9DC2-B4D9606AC75D BootID:67da09a6-3b3d-456e-85a9-9604c443d71d Filesystems:[{Device:/dev/vda3 DeviceMajor:253 DeviceMinor:3 Capacity:33281273856 Type:vfs Inodes:32517120 HasInodes:true} {Device:/dev/vda1 DeviceMajor:253 DeviceMinor:1 Capacity:520794112 Type:vfs Inodes:512000 HasInodes:true}] DiskMap:map[252:0:{Name:dm-0 Major:252 Minor:0 Size:107374182400 Scheduler:none} 253:0:{Name:vda Major:253 Minor:0 Size:37580963840 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:52:54:00:14:cd:d1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:4294529024 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I1213 16:54:55.186891   30603 manager.go:217] Version: {KernelVersion:3.10.0-693.5.2.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:17.11.0-ce DockerAPIVersion:1.34 CadvisorVersion: CadvisorRevision:}
I1213 16:54:55.187682   30603 server.go:546] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
W1213 16:54:55.234123   30603 container_manager_linux.go:218] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
I1213 16:54:55.234277   30603 container_manager_linux.go:246] container manager verified user specified cgroup-root exists: /
I1213 16:54:55.234304   30603 container_manager_linux.go:251] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
I1213 16:54:55.234643   30603 kubelet.go:271] Watching apiserver
E1213 16:54:55.310482   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
W1213 16:54:55.415465   30603 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I1213 16:54:55.415518   30603 kubelet.go:507] Hairpin mode set to "hairpin-veth"
W1213 16:54:55.516982   30603 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
I1213 16:54:55.752258   30603 docker_service.go:210] Docker cri networking managed by kubernetes.io/no-op
I1213 16:54:55.812225   30603 docker_service.go:227] Setting cgroupDriver to systemd
W1213 16:54:55.832909   30603 util_linux.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
I1213 16:54:55.920573   30603 remote_runtime.go:42] Connecting to runtime service /var/run/dockershim.sock
W1213 16:54:55.920611   30603 util_linux.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
W1213 16:54:55.920696   30603 util_linux.go:75] Using "/var/run/dockershim.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/dockershim.sock".
I1213 16:54:55.960430   30603 kuberuntime_manager.go:178] Container runtime docker initialized, version: 17.11.0-ce, apiVersion: 1.34.0
I1213 16:54:55.993674   30603 server.go:869] Started kubelet v1.7.6+a08f5eeb62
I1213 16:54:55.993727   30603 server.go:132] Starting to listen on 0.0.0.0:10250
I1213 16:54:55.994857   30603 server.go:314] Adding debug handlers to kubelet server.
E1213 16:54:56.000262   30603 kubelet.go:1191] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
I1213 16:54:56.001482   30603 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I1213 16:54:56.001510   30603 status_manager.go:141] Starting to sync pod status with apiserver
I1213 16:54:56.001530   30603 kubelet.go:1785] Starting kubelet main sync loop.
I1213 16:54:56.001550   30603 kubelet.go:1796] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
W1213 16:54:56.002458   30603 container_manager_linux.go:747] CPUAccounting not enabled for pid: 30603
W1213 16:54:56.002467   30603 container_manager_linux.go:750] MemoryAccounting not enabled for pid: 30603
E1213 16:54:56.002513   30603 container_manager_linux.go:543] [ContainerManager]: Fail to get rootfs information unable to find data for container /
I1213 16:54:56.002538   30603 volume_manager.go:245] Starting Kubelet Volume Manager
I1213 16:54:56.076447   30603 network.go:226] Started Kubernetes Proxy on 0.0.0.0
I1213 16:54:56.077635   30603 config.go:202] Starting service config controller
I1213 16:54:56.077651   30603 controller_utils.go:1025] Waiting for caches to sync for service config controller
I1213 16:54:56.077676   30603 config.go:102] Starting endpoints config controller
I1213 16:54:56.077693   30603 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
I1213 16:54:56.077740   30603 network.go:52] Starting DNS on 10.1.88.33:53
I1213 16:54:56.090888   30603 logs.go:41] skydns: ready for queries on cluster.local. for tcp://10.1.88.33:53 [rcache 0]
I1213 16:54:56.090903   30603 logs.go:41] skydns: ready for queries on cluster.local. for udp://10.1.88.33:53 [rcache 0]
I1213 16:54:56.107088   30603 kubelet_node_status.go:270] Setting node annotation to enable volume controller attach/detach
E1213 16:54:56.109012   30603 dnsmasq.go:105] unable to periodically refresh dnsmasq status: The name uk.org.thekelleys.dnsmasq was not provided by any .service files
E1213 16:54:56.184381   30603 factory.go:336] devicemapper filesystem stats will not be reported: usage of thin_ls is disabled to preserve iops
I1213 16:54:56.214510   30603 controller_utils.go:1032] Caches are synced for service config controller
I1213 16:54:56.214637   30603 controller_utils.go:1032] Caches are synced for endpoints config controller
I1213 16:54:56.256891   30603 factory.go:351] Registering Docker factory
W1213 16:54:56.256946   30603 manager.go:260] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
W1213 16:54:56.257085   30603 manager.go:271] Registration of the crio container factory failed: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: connection refused
I1213 16:54:56.257105   30603 factory.go:54] Registering systemd factory
I1213 16:54:56.257433   30603 factory.go:86] Registering Raw factory
I1213 16:54:56.257642   30603 manager.go:1139] Started watching for new ooms in manager
I1213 16:54:56.286423   30603 oomparser.go:185] oomparser using systemd
I1213 16:54:56.287418   30603 manager.go:306] Starting recovery of all containers
E1213 16:54:56.311879   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:56.313722   30603 kubelet_node_status.go:82] Attempting to register node server33
I1213 16:54:56.631767   30603 manager.go:311] Recovery completed
E1213 16:54:56.897942   30603 helpers.go:778] Could not find capacity information for resource storage.kubernetes.io/scratch
W1213 16:54:56.898009   30603 helpers.go:789] eviction manager: no observation found for eviction signal allocatableNodeFs.available
I1213 16:54:56.933698   30603 kubelet_node_status.go:133] Node server33 was previously registered
I1213 16:54:56.933723   30603 kubelet_node_status.go:85] Successfully registered node server33
E1213 16:54:57.157837   30603 controllers.go:116] Server isn't healthy yet. Waiting a little while.
I1213 16:54:58.163849   30603 start_master.go:627] Started serviceaccount-token controller
I1213 16:54:58.177657   30603 controllermanager.go:108] Version: v1.7.6+a08f5eeb62
E1213 16:54:58.177759   30603 controllermanager.go:116] unable to register configz: register config "componentconfig" twice
I1213 16:54:58.182344   30603 leaderelection.go:179] attempting to acquire leader lease...
I1213 16:54:58.182603   30603 controller_utils.go:1025] Waiting for caches to sync for tokens controller
I1213 16:54:58.247139   30603 leaderelection.go:189] successfully acquired lease kube-system/kube-controller-manager
I1213 16:54:58.249613   30603 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"kube-controller-manager", UID:"0090912f-dfb4-11e7-b616-52540014cdd1", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' server33 became leader
I1213 16:54:58.384554   30603 controller_utils.go:1032] Caches are synced for tokens controller
I1213 16:54:58.408642   30603 plugins.go:101] No cloud provider specified.
W1213 16:54:58.408716   30603 controllermanager.go:481] "serviceaccount-token" is disabled
E1213 16:54:58.441671   30603 core.go:68] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
W1213 16:54:58.441704   30603 controllermanager.go:463] Skipping "service"
I1213 16:54:58.500018   30603 start_master.go:690] Started "openshift.io/horizontalpodautoscaling"
I1213 16:54:58.500198   30603 horizontal.go:145] Starting HPA controller
I1213 16:54:58.500252   30603 controller_utils.go:1025] Waiting for caches to sync for HPA controller
I1213 16:54:58.530355   30603 controllermanager.go:466] Started "attachdetach"
I1213 16:54:58.530527   30603 attach_detach_controller.go:242] Starting attach detach controller
I1213 16:54:58.530541   30603 controller_utils.go:1025] Waiting for caches to sync for attach detach controller
I1213 16:54:58.568949   30603 controllermanager.go:466] Started "podgc"
I1213 16:54:58.569426   30603 gc_controller.go:76] Starting GC controller
I1213 16:54:58.569469   30603 controller_utils.go:1025] Waiting for caches to sync for GC controller
I1213 16:54:58.597315   30603 start_master.go:690] Started "openshift.io/cluster-quota-reconciliation"
I1213 16:54:58.597723   30603 clusterquotamapping.go:160] Starting ClusterQuotaMappingController controller
I1213 16:54:58.648388   30603 start_master.go:690] Started "openshift.io/deploymentconfig"
I1213 16:54:58.648550   30603 factory.go:79] Starting deploymentconfig controller
I1213 16:54:58.686406   30603 controllermanager.go:466] Started "namespace"
I1213 16:54:58.686589   30603 controller_utils.go:1025] Waiting for caches to sync for namespace controller
I1213 16:54:58.730320   30603 start_master.go:690] Started "openshift.io/image-trigger"
I1213 16:54:58.730485   30603 image_trigger_controller.go:214] Starting trigger controller
I1213 16:54:58.734605   30603 controllermanager.go:466] Started "deployment"
I1213 16:54:58.734764   30603 deployment_controller.go:152] Starting deployment controller
I1213 16:54:58.734797   30603 controller_utils.go:1025] Waiting for caches to sync for deployment controller
I1213 16:54:58.768592   30603 start_master.go:690] Started "openshift.io/image-signature-import"
I1213 16:54:58.773118   30603 controllermanager.go:466] Started "replicaset"
I1213 16:54:58.773291   30603 replica_set.go:157] Starting replica set controller
I1213 16:54:58.773320   30603 controller_utils.go:1025] Waiting for caches to sync for replica set controller
I1213 16:54:58.810283   30603 controllermanager.go:466] Started "cronjob"
W1213 16:54:58.810315   30603 controllermanager.go:450] "bootstrapsigner" is disabled
I1213 16:54:58.810452   30603 cronjob_controller.go:99] Starting CronJob Manager
I1213 16:54:58.893640   30603 start_master.go:690] Started "openshift.io/resourcequota"
W1213 16:54:58.893688   30603 start_master.go:687] Skipping "openshift.io/sdn"
I1213 16:54:58.893968   30603 resource_quota_controller.go:237] Starting resource quota controller
I1213 16:54:58.894008   30603 controller_utils.go:1025] Waiting for caches to sync for resource quota controller
I1213 16:54:58.920448   30603 controller_utils.go:1025] Waiting for caches to sync for scheduler controller
I1213 16:54:58.979850   30603 controllermanager.go:466] Started "endpoint"
I1213 16:54:58.980128   30603 endpoints_controller.go:144] Starting endpoint controller
I1213 16:54:58.980161   30603 controller_utils.go:1025] Waiting for caches to sync for endpoint controller
I1213 16:54:59.033512   30603 controller_utils.go:1032] Caches are synced for scheduler controller
I1213 16:54:59.033602   30603 leaderelection.go:179] attempting to acquire leader lease...
I1213 16:54:59.070597   30603 leaderelection.go:189] successfully acquired lease kube-system/kube-scheduler
I1213 16:54:59.071152   30603 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"kube-scheduler", UID:"010dd9d5-dfb4-11e7-b616-52540014cdd1", APIVersion:"v1", ResourceVersion:"1359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' server33 became leader
E1213 16:54:59.235200   30603 util.go:45] Metric for serviceaccount_controller already registered
I1213 16:54:59.235316   30603 controllermanager.go:466] Started "serviceaccount"
I1213 16:54:59.235478   30603 serviceaccounts_controller.go:113] Starting service account controller
I1213 16:54:59.235512   30603 controller_utils.go:1025] Waiting for caches to sync for service account controller
I1213 16:54:59.281421   30603 controllermanager.go:466] Started "statefulset"
W1213 16:54:59.281463   30603 core.go:78] Unsuccessful parsing of cluster CIDR : invalid CIDR address: 
W1213 16:54:59.281479   30603 core.go:82] Unsuccessful parsing of service CIDR : invalid CIDR address: 
I1213 16:54:59.281630   30603 stateful_set.go:147] Starting stateful set controller
I1213 16:54:59.281678   30603 controller_utils.go:1025] Waiting for caches to sync for stateful set controller
I1213 16:54:59.298696   30603 start_master.go:690] Started "openshift.io/unidling"
I1213 16:54:59.328862   30603 nodecontroller.go:224] Sending events to api server.
I1213 16:54:59.329050   30603 taint_controller.go:159] Sending events to api server.
I1213 16:54:59.329131   30603 controllermanager.go:466] Started "node"
W1213 16:54:59.329151   30603 core.go:116] Unsuccessful parsing of cluster CIDR : invalid CIDR address: 
I1213 16:54:59.329163   30603 core.go:132] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W1213 16:54:59.329171   30603 controllermanager.go:463] Skipping "route"
I1213 16:54:59.329305   30603 nodecontroller.go:481] Starting node controller
I1213 16:54:59.329331   30603 controller_utils.go:1025] Waiting for caches to sync for node controller
E1213 16:54:59.341681   30603 util.go:45] Metric for serviceaccount_controller already registered
I1213 16:54:59.341745   30603 start_master.go:690] Started "openshift.io/serviceaccount"
I1213 16:54:59.341896   30603 serviceaccounts_controller.go:113] Starting service account controller
I1213 16:54:59.341922   30603 controller_utils.go:1025] Waiting for caches to sync for service account controller
I1213 16:54:59.370589   30603 controllermanager.go:466] Started "persistentvolume-binder"
I1213 16:54:59.370754   30603 pv_controller_base.go:270] Starting persistent volume controller
I1213 16:54:59.370781   30603 controller_utils.go:1025] Waiting for caches to sync for persistent volume controller
I1213 16:54:59.408165   30603 controllermanager.go:466] Started "resourcequota"
I1213 16:54:59.408353   30603 resource_quota_controller.go:237] Starting resource quota controller
I1213 16:54:59.408381   30603 controller_utils.go:1025] Waiting for caches to sync for resource quota controller
I1213 16:54:59.445384   30603 controllermanager.go:466] Started "daemonset"
I1213 16:54:59.445563   30603 daemoncontroller.go:222] Starting daemon sets controller
I1213 16:54:59.445591   30603 controller_utils.go:1025] Waiting for caches to sync for daemon sets controller
I1213 16:54:59.483244   30603 start_master.go:690] Started "openshift.io/build"
I1213 16:54:59.487155   30603 controllermanager.go:466] Started "disruption"
W1213 16:54:59.487180   30603 controllermanager.go:463] Skipping "csrsigning"
I1213 16:54:59.487605   30603 disruption.go:297] Starting disruption controller
I1213 16:54:59.487622   30603 controller_utils.go:1025] Waiting for caches to sync for disruption controller
I1213 16:54:59.519584   30603 start_master.go:690] Started "openshift.io/deployer"
I1213 16:54:59.519749   30603 factory.go:76] Starting deployer controller
I1213 16:54:59.528485   30603 controllermanager.go:466] Started "csrapproving"
W1213 16:54:59.528508   30603 controllermanager.go:450] "tokencleaner" is disabled
I1213 16:54:59.528648   30603 certificate_controller.go:110] Starting certificate controller
I1213 16:54:59.528677   30603 controller_utils.go:1025] Waiting for caches to sync for certificate controller
I1213 16:54:59.543017   30603 imagestream_controller.go:59] Starting image stream controller
I1213 16:54:59.568905   30603 start_master.go:690] Started "openshift.io/image-import"
I1213 16:54:59.569080   30603 scheduled_image_controller.go:59] Starting scheduled import controller
I1213 16:54:59.569997   30603 controllermanager.go:466] Started "replicationcontroller"
I1213 16:54:59.570277   30603 replication_controller.go:152] Starting RC controller
I1213 16:54:59.570306   30603 controller_utils.go:1025] Waiting for caches to sync for RC controller
I1213 16:54:59.664759   30603 start_master.go:690] Started "openshift.io/templateinstance"
I1213 16:54:59.688540   30603 start_master.go:690] Started "openshift.io/ingress-ip"
I1213 16:54:59.707525   30603 start_master.go:690] Started "openshift.io/serviceaccount-pull-secrets"
I1213 16:54:59.751119   30603 controllermanager.go:466] Started "garbagecollector"
I1213 16:54:59.751221   30603 garbagecollector.go:126] Starting garbage collector controller
I1213 16:54:59.751300   30603 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller
I1213 16:54:59.757824   30603 start_master.go:690] Started "openshift.io/origin-namespace"
W1213 16:54:59.800289   30603 shared_informer.go:298] resyncPeriod 120000000000 is smaller than resyncCheckPeriod 600000000000 and the informer has already started. Changing it to 600000000000
I1213 16:54:59.816801   30603 start_master.go:690] Started "openshift.io/service-serving-cert"
I1213 16:54:59.944534   30603 start_master.go:690] Started "openshift.io/build-config-change"
I1213 16:54:59.944569   30603 start_master.go:693] Started Origin Controllers
I1213 16:55:00.150691   30603 factory.go:83] Deployer controller caches are synced. Starting workers.
E1213 16:55:00.157057   30603 actual_state_of_world.go:478] Failed to set statusUpdateNeeded to needed true because nodeName="server33"  does not exist
E1213 16:55:00.157071   30603 actual_state_of_world.go:492] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="server33"  does not exist
I1213 16:55:00.188509   30603 controller_utils.go:1032] Caches are synced for GC controller
I1213 16:55:00.188754   30603 controller_utils.go:1032] Caches are synced for RC controller
I1213 16:55:00.192576   30603 controllermanager.go:466] Started "job"
W1213 16:55:00.192592   30603 controllermanager.go:450] "horizontalpodautoscaling" is disabled
W1213 16:55:00.192599   30603 controllermanager.go:450] "ttl" is disabled
I1213 16:55:00.192633   30603 jobcontroller.go:134] Starting job controller
I1213 16:55:00.192654   30603 controller_utils.go:1025] Waiting for caches to sync for job controller
I1213 16:55:00.204698   30603 controller_utils.go:1032] Caches are synced for replica set controller
I1213 16:55:00.204750   30603 controller_utils.go:1032] Caches are synced for endpoint controller
I1213 16:55:00.204790   30603 controller_utils.go:1032] Caches are synced for stateful set controller
I1213 16:55:00.204827   30603 build_controller.go:243] Starting build controller
I1213 16:55:00.204868   30603 controller_utils.go:1032] Caches are synced for disruption controller
I1213 16:55:00.204876   30603 disruption.go:305] Sending events to api server.
I1213 16:55:00.209270   30603 controller_utils.go:1032] Caches are synced for resource quota controller
I1213 16:55:00.212732   30603 controller_utils.go:1032] Caches are synced for resource quota controller
I1213 16:55:00.212810   30603 controller_utils.go:1032] Caches are synced for HPA controller
I1213 16:55:00.234012   30603 controller_utils.go:1032] Caches are synced for attach detach controller
I1213 16:55:00.239822   30603 controller_utils.go:1032] Caches are synced for certificate controller
I1213 16:55:00.239865   30603 controller_utils.go:1032] Caches are synced for node controller
I1213 16:55:00.241766   30603 nodecontroller.go:542] Initializing eviction metric for zone: 
W1213 16:55:00.241894   30603 nodecontroller.go:877] Missing timestamp for Node server33. Assuming now as a timestamp.
I1213 16:55:00.241961   30603 nodecontroller.go:793] NodeController detected that zone  is now in state Normal.
I1213 16:55:00.242424   30603 taint_controller.go:182] Starting NoExecuteTaintManager
I1213 16:55:00.242961   30603 event.go:218] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"server33", UID:"fd0dc8a1-dfb3-11e7-b616-52540014cdd1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node server33 event: Registered Node server33 in NodeController
I1213 16:55:00.245725   30603 controller_utils.go:1032] Caches are synced for daemon sets controller
I1213 16:55:00.250600   30603 controller_utils.go:1032] Caches are synced for deployment controller
I1213 16:55:00.250652   30603 controller_utils.go:1032] Caches are synced for service account controller
I1213 16:55:00.250721   30603 controller_utils.go:1032] Caches are synced for service account controller
I1213 16:55:00.250813   30603 buildconfig_controller.go:185] Starting buildconfig controller
I1213 16:55:00.257509   30603 controller_utils.go:1032] Caches are synced for garbage collector controller
I1213 16:55:00.257524   30603 garbagecollector.go:135] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1213 16:55:00.259863   30603 factory.go:86] deploymentconfig controller caches are synced. Starting workers.
I1213 16:55:00.285164   30603 controller_utils.go:1032] Caches are synced for persistent volume controller
I1213 16:55:00.290354   30603 controller_utils.go:1032] Caches are synced for namespace controller
I1213 16:55:00.298098   30603 controller_utils.go:1032] Caches are synced for job controller
E1213 16:55:26.130406   30603 dnsmasq.go:105] unable to periodically refresh dnsmasq status: The name uk.org.thekelleys.dnsmasq was not provided by any .service files
E1213 16:55:56.158886   30603 dnsmasq.go:105] unable to periodically refresh dnsmasq status: The name uk.org.thekelleys.dnsmasq was not provided by any .service files
E1213 16:56:26.181248   30603 dnsmasq.go:105] unable to periodically refresh dnsmasq status: The name uk.org.thekelleys.dnsmasq was not provided by any .service files

Expected Result

Additional Information

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 3
  • Comments: 28 (3 by maintainers)

Most upvoted comments

One thing is for sure: none of the versions of OpenShift installs properly. No idea whether thorough testing was done. Neither “oc cluster up” nor “openshift start” works, and there is no clear documentation available. I tried 3.7, 3.10, and 3.11, but with no success, so I gave up on OpenShift. Instead I am going to install Kubernetes or Apache Mesos.

I might have found the solution.

As stated in the documentation ( https://docs.openshift.com/enterprise/3.2/install_config/install/prerequisites.html ), dnsmasq is one of the pre-requisites for the Openshift installation.

Quick guide to install it on your system:

atomic host install dnsmasq (for atomic distros) or dnf install dnsmasq

then run systemctl start dnsmasq. The configuration for dnsmasq is located at /etc/dnsmasq.conf
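The install-and-start steps above could be scripted roughly as follows. This is a sketch, not a verified fix: it assumes a systemd-based distro, and the package and unit names are the standard ones (adjust for your package manager):

```shell
#!/bin/sh
# Sketch: install and start dnsmasq as a prerequisite for OpenShift.
# Assumes a systemd-based distro; commands need root.

# Pick the install command matching your distro:
#   atomic host install dnsmasq    # Atomic distros
#   dnf install -y dnsmasq         # Fedora / newer RHEL
yum install -y dnsmasq             # CentOS 7, as in this report

# Start the service now and enable it across reboots.
systemctl start dnsmasq
systemctl enable dnsmasq

# Verify it is actually running.
systemctl status dnsmasq
```

The "uk.org.thekelleys.dnsmasq was not provided by any .service files" error in the log is a D-Bus lookup failure, so you may also need to uncomment `enable-dbus` in /etc/dnsmasq.conf (and restart dnsmasq) so that dnsmasq registers that D-Bus name.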

You might also need to use different parameters when running openshift. openshift start --help will help.

Example (unable to confirm yet): openshift start --dns='tcp://0.0.0.0:8053'

Disclaimer:

My instance is not running yet because of other reasons, but I am no longer having problems with dnsmasq.

+1

Here is the version info:

~# openshift version
openshift v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

System info:

~# cat /etc/os-release 
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial