minikube: none driver can't be used with non-Docker runtimes (looks for "docker" executable)

Something is wrong with the runtime detection in the precreate step:

$ sudo minikube start --vm-driver=none --container-runtime=cri-o
😄  minikube v1.4.0 on Ubuntu 16.04
🤹  Running on localhost (CPUs=4, Memory=7800MB, Disk=138379MB) ...
🔄  Retriable failure: create: precreate: exec: "docker": executable file not found in $PATH
🤹  Running on localhost (CPUs=4, Memory=7800MB, Disk=138379MB) ...
🔄  Retriable failure: create: precreate: exec: "docker": executable file not found in $PATH
🤹  Running on localhost (CPUs=4, Memory=7800MB, Disk=138379MB) ...
🔄  Retriable failure: create: precreate: exec: "docker": executable file not found in $PATH
^C

For some reason it is calling the wrong Available function?

func (r *Docker) Available() error {
	_, err := exec.LookPath("docker")
	return err
}
func (r *CRIO) Available() error {
	return r.Runner.Run("command -v crio")
}

And the Docker runtime seems to be checked locally? (I guess our use of docker machine made it always available there.)

It also forgot to look for crictl, but that is another story. (And it is interesting that this is regarded as a “retriable failure”.)

About this issue

  • State: open
  • Created 5 years ago
  • Reactions: 1
  • Comments: 15 (2 by maintainers)

Most upvoted comments

This issue has to do with how the none driver is designed, relative to how createHost (pkg/minikube/machine/start.go) is designed:


func createHost(api libmachine.API, cfg *config.ClusterConfig, n *config.Node) (*host.Host, error) {
	// inside this function we have access to the cluster config
	// ...
	def := registry.Driver(cfg.Driver)
	// ...
	dd, err := def.Config(*cfg, *n)
	// ...
	data, err := json.Marshal(dd)
	// ...
	h, err := api.NewHost(cfg.Driver, data)
	// ...
	if err := timedCreateHost(h, api, cfg.StartHostTimeout); err != nil {
	// ...

timedCreateHost() in turn calls api.Create, passing the *host.Host, which contains the driver we *initialized* inside api.NewHost.

The thing is that NewHost (which doesn’t have access to the cluster config) does this annoying thing:

	1. def := registry.Driver(drvName)
	2. d := def.Init()
	3. err := json.Unmarshal(rawDriver, d)
	   where rawDriver is the local name for the passed 'data'

After the json.Unmarshal, that is the driver we end up with.

This is what the none driver looks like:

type Driver struct {
	*drivers.BaseDriver
	*pkgdrivers.CommonDriver
	URL     string
	runtime cruntime.Manager
	exec    command.Runner
}

It relies on runtime and exec, which have to be initialized, but nothing initialized there would survive the marshal/unmarshal round trip in api.NewHost.

So what is happening is that when def.Init() is called inside api.NewHost:

// pkg/minikube/registry/drvs/none/none.go
		Init:     func() drivers.Driver { return none.NewDriver(none.Config{}) },

a none.NewDriver call is issued, with an empty none.Config…

func NewDriver(c Config) *Driver {
	runtime, err := cruntime.New(cruntime.Config{Type: c.ContainerRuntime, Runner: runner})
        // ...
	return &Driver{
               // ...
		runtime: runtime,
	}
}

(This makes sense, since we shouldn't know anything about the driver during initialization… that's what the config should be for.)

So we end up initializing a cruntime with c.ContainerRuntime == "", which defaults to docker (case "", "docker": inside pkg/minikube/cruntime/cruntime.go). So no container-runtime setting can work with the none driver during this step.

And after our unmarshal… (the raw JSON looks like this:

"{\"IPAddress\":\"\",\"MachineName\":\"minikube\",\"SSHUser\":\"\",\"SSHPort\":0,\"SSHKeyPath\":\"\",\"StorePath\":\"/home/ubuntu/.minikube\",\"SwarmMaster\":false,\"SwarmHost\":\"\",\"SwarmDiscovery\":\"\",\"URL\":\"\"}"

…and the runtime field of the struct is not even exported, so there is no way for the runtime config to be passed on.)

the runtime remains docker.