kubevirt: Unable to compile and run KubeVirt locally for development

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

I’m following the Getting Started docs to run KubeVirt locally so I can build and contribute. However, the make cluster-up command did not work as expected. I tried it on both macOS and Ubuntu, and neither worked for me.

The error looks related to the SSH connection (Could not establish a ssh connection to the VM...).

Note: All information/logs below were from my Ubuntu environment.

What you expected to happen:

As per the docs page, I was expecting:

a virtual machine called node01 which acts as node and master

How to reproduce it (as minimally and precisely as possible):

$ export KUBEVIRT_PROVIDER=k8s-1.18
$ make cluster-up

Anything else we need to know?:

Full logs:

$ export KUBEVIRT_PROVIDER=k8s-1.18

$ make cluster-up
./cluster-up/up.sh
Unable to find image 'quay.io/kubevirtci/gocli:2103261553-31913e9' locally
2103261553-31913e9: Pulling from kubevirtci/gocli
f21f65346f8c: Pull complete
Digest: sha256:725a004f4b0024cd5bee7aba2a07ecad68c9d2c5166d5a4057c91914477d8c49
Status: Downloaded newer image for quay.io/kubevirtci/gocli:2103261553-31913e9
Download the image quay.io/kubevirtci/k8s-1.18:2103261553-31913e9
time="2021-04-13T16:25:57Z" level=info msg="Using remote image quay.io/kubevirtci/k8s-1.18:2103261553-31913e9"
Downloading ....................................................
time="2021-04-13T16:30:44Z" level=info msg="Using remote image library/registry:2.7.1"
Downloading ..................................................
time="2021-04-13T16:30:47Z" level=info msg="waiting for node to come up"
2021/04/13 16:30:47 Waiting for host: 192.168.66.101:22
time="2021-04-13T16:30:47Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="ssh.sh echo VM is up failed"
time="2021-04-13T16:30:48Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:48Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"
time="2021-04-13T16:30:49Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:49Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"
time="2021-04-13T16:30:50Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:50Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"
time="2021-04-13T16:30:51Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:51Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"
time="2021-04-13T16:30:52Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:52Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"
time="2021-04-13T16:30:53Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:53Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"
time="2021-04-13T16:30:54Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:54Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"
time="2021-04-13T16:30:55Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:55Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"
time="2021-04-13T16:30:56Z" level=info msg="waiting for node to come up"
time="2021-04-13T16:30:56Z" level=warning msg="Could not establish a ssh connection to the VM, retrying ..." error="waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running"

===== bb39d88e73e7f9bd307b60654892d3907198a1cf887708e1ac7bf4b22877fbb8 ====
+ NUM_NODES=1
+ NUM_SECONDARY_NICS=0
+ ip link add br0 type bridge
+ echo 0
+ echo 1
+ ip link set dev br0 up
+ ip addr add dev br0 192.168.66.02/24
+ ip -6 addr add fd00::1/64 dev br0
++ seq 1 0
++ seq 1 1
+ for i in $(seq 1 ${NUM_NODES})
++ printf %02d 1
+ n=01
++ whoami
+ ip tuntap add dev tap01 mode tap user root
+ ip link set tap01 master br0
+ ip link set dev tap01 up
+ DHCP_HOSTS=' --dhcp-host=52:55:00:d1:55:01,192.168.66.101,[fd00::101],node01,infinite'
++ seq 1 0
+ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
+ iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+ iptables -A FORWARD -i br0 -o eth0 -j ACCEPT
+ ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
+ ip6tables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+ ip6tables -A FORWARD -i br0 -o eth0 -j ACCEPT
+ exec dnsmasq --interface=br0 --enable-ra -d '--dhcp-host=52:55:00:d1:55:01,192.168.66.101,[fd00::101],node01,infinite' --dhcp-range=192.168.66.10,192.168.66.200,infinite --dhcp-range=::10,::200,constructor:br0,static
dnsmasq: started, version 2.80 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify dumpfile
dnsmasq-dhcp: DHCP, IP range 192.168.66.10 -- 192.168.66.200, lease time infinite
dnsmasq-dhcp: DHCPv6, static leases only on ::200, lease time 1h, template for br0
dnsmasq-dhcp: router advertisement on br0
dnsmasq-dhcp: DHCPv6, static leases only on fd00::200, lease time 1h, constructed for br0
dnsmasq-dhcp: router advertisement on fd00::, constructed for br0
dnsmasq-dhcp: IPv6 router advertisement enabled
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 8.8.4.4#53
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: read /etc/hosts - 10 addresses

===== ba9c0dd8b6e757d0fbe429222a0614f2f9d0ef4a92000e3518b6e2ab555c4212 ====
time="2021-04-13T16:30:46.67612569Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.11.2 instance.id=362f4505-55fe-456a-b13a-1e24f804bab9 service=registry version=v2.7.1
time="2021-04-13T16:30:46.676335429Z" level=info msg="redis not configured" go.version=go1.11.2 instance.id=362f4505-55fe-456a-b13a-1e24f804bab9 service=registry version=v2.7.1
time="2021-04-13T16:30:46.68486348Z" level=info msg="Starting upload purge in 18m0s" go.version=go1.11.2 instance.id=362f4505-55fe-456a-b13a-1e24f804bab9 service=registry version=v2.7.1
time="2021-04-13T16:30:46.68664069Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.11.2 instance.id=362f4505-55fe-456a-b13a-1e24f804bab9 service=registry version=v2.7.1
time="2021-04-13T16:30:46.686801822Z" level=info msg="listening on [::]:5000" go.version=go1.11.2 instance.id=362f4505-55fe-456a-b13a-1e24f804bab9 service=registry version=v2.7.1

===== c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f ====
+ PROVISION=false
+ MEMORY=3096M
+ CPU=2
+ QEMU_ARGS=
+ NEXT_DISK=
+ BLOCK_DEV=
+ BLOCK_DEV_SIZE=
+ true
+ case "$1" in
+ NEXT_DISK=/var/run/disk/disk.qcow2
+ shift 2
+ true
+ case "$1" in
+ MEMORY=5120M
+ shift 2
+ true
+ case "$1" in
+ CPU=6
+ shift 2
+ true
+ case "$1" in
+ QEMU_ARGS=' -serial pty'
+ shift 2
+ true
+ case "$1" in
+ break
+ NODE_NUM=01
++ printf %02d 1
+ n=01
+ cat
+ chmod u+x /usr/local/bin/ssh.sh
+ echo done
+ sleep 0.1
+ ip link show tap01
+ iptables -t nat -A POSTROUTING '!' -s 192.168.66.0/16 --out-interface br0 -j MASQUERADE
+ iptables -A FORWARD --in-interface eth0 -j ACCEPT
3: tap01: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master br0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 92:7f:2f:21:fd:53 brd ff:ff:ff:ff:ff:ff
+ iptables -t nat -A PREROUTING -p tcp -i eth0 -m tcp --dport 2201 -j DNAT --to-destination 192.168.66.101:22
+ '[' 01 = 01 ']'
+ iptables -t nat -A PREROUTING -p tcp -i eth0 -m tcp --dport 6443 -j DNAT --to-destination 192.168.66.101:6443
+ iptables -t nat -A PREROUTING -p tcp -i eth0 -m tcp --dport 8443 -j DNAT --to-destination 192.168.66.101:8443
+ iptables -t nat -A PREROUTING -p tcp -i eth0 -m tcp --dport 80 -j DNAT --to-destination 192.168.66.101:80
+ iptables -t nat -A PREROUTING -p tcp -i eth0 -m tcp --dport 443 -j DNAT --to-destination 192.168.66.101:443
+ '[' -f provisioned.qcow2 ']'
+ calc_next_disk
++ sed -e s/disk// -e s/.qcow2//
++ head -1
++ ls -t disk01.qcow2
+ last=01
+ last=01
+ next=2
++ printf /disk%02d.qcow2 2
+ next=/disk02.qcow2
+ '[' -n /var/run/disk/disk.qcow2 ']'
+ next=/var/run/disk/disk.qcow2
+ '[' 01 = 00 ']'
++ printf /disk%02d.qcow2 01
+ last=/disk01.qcow2
+ default_disk_size=37580963840
++ jq '.["virtual-size"]'
++ qemu-img info --output json /disk01.qcow2
+ disk_size=37580963840
+ '[' 37580963840 -lt 37580963840 ']'
+ echo 'Creating disk "/var/run/disk/disk.qcow2 backed by /disk01.qcow2 with size 37580963840".'
Creating disk "/var/run/disk/disk.qcow2 backed by /disk01.qcow2 with size 37580963840".
+ qemu-img create -f qcow2 -o backing_file=/disk01.qcow2 /var/run/disk/disk.qcow2 37580963840
Formatting '/var/run/disk/disk.qcow2', fmt=qcow2 size=37580963840 backing_file=/disk01.qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ echo ''
+ echo 'SSH will be available on container port 2201.'
SSH will be available on container port 2201.
+ echo 'VNC will be available on container port 5901.'
VNC will be available on container port 5901.
+ echo 'VM MAC in the guest network will be 52:55:00:d1:55:01'
VM MAC in the guest network will be 52:55:00:d1:55:01
+ echo 'VM IP in the guest network will be 192.168.66.101'
VM IP in the guest network will be 192.168.66.101
+ echo 'VM hostname will be node01'
VM hostname will be node01
+ '[' '!' -e /dev/kvm ']'
+ export QEMU_AUDIO_DRV=none
+ QEMU_AUDIO_DRV=none
+ block_dev_arg=
+ '[' -n '' ']'
+ exec qemu-system-x86_64 -enable-kvm -drive format=qcow2,file=/var/run/disk/disk.qcow2,if=virtio,cache=unsafe -device virtio-net-pci,netdev=network0,mac=52:55:00:d1:55:01 -netdev tap,id=network0,ifname=tap01,script=no,downscript=no -device virtio-rng-pci -vnc :01 -cpu host -m 5120M -smp 6 -serial pty -serial pty -M q35,accel=kvm,kernel_irqchip=split -device intel-iommu,intremap=on,caching-mode=on -soundhw hda
char device redirected to /dev/pts/2 (label serial0)
char device redirected to /dev/pts/3 (label serial1)
qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot allocate memory
volume: k8s-1.18-node01
could not establish a connection to the node after a generous timeout: waiting for node to come up failed: Error response from daemon: Container c0e424dc172a5a011c56fb46ece7111d5a3b0d8b63210810108720bc1ec5251f is not running
Makefile:122: recipe for target 'cluster-up' failed
make: *** [cluster-up] Error 1
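
For reference, the qemu command in the container log starts the node VM with -m 5120M, and the final error (cannot set up guest memory 'pc.ram': Cannot allocate memory) usually means the host could not allocate that much memory when qemu started. A minimal check, assuming a Linux host with /proc/meminfo:

```shell
# The node VM is launched with -m 5120M, so roughly 5 GiB must be
# allocatable on the host at the moment qemu starts.
required_mb=5120
# MemAvailable is reported in KiB; convert to MiB.
available_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
echo "available: ${available_mb} MiB, required: ${required_mb} MiB"
if [ "${available_mb}" -lt "${required_mb}" ]; then
    echo "not enough free memory for the node VM"
fi
```

If the host is short on memory, either pick a larger instance or (in versions of kubevirtci that support it) lower the VM size via the KUBEVIRT_MEMORY_SIZE environment variable before make cluster-up; whether that variable applies depends on the provider scripts in use.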

Environment:

  • KubeVirt version (use virtctl version): Not installed
  • Kubernetes version (use kubectl version): Not installed / not using any Kubernetes cluster (if I understand the docs correctly, Docker is all that is needed)
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: Civo.com IaaS
  • OS (e.g. from /etc/os-release):
    NAME="Ubuntu"
    VERSION="18.04.5 LTS (Bionic Beaver)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 18.04.5 LTS"
    VERSION_ID="18.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=bionic
    UBUNTU_CODENAME=bionic
    
  • Kernel (e.g. uname -a):
    Linux kubevirt-qos-46252e17 4.15.0-141-generic #145-Ubuntu SMP Wed Mar 24 18:08:07 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    
  • Install tools:
    $ docker version
    Client: Docker Engine - Community
     Version:           20.10.6
     API version:       1.41
     Go version:        go1.13.15
     Git commit:        370c289
     Built:             Fri Apr  9 22:46:01 2021
     OS/Arch:           linux/amd64
     Context:           default
     Experimental:      true
    
    Server: Docker Engine - Community
     Engine:
      Version:          20.10.6
      API version:      1.41 (minimum version 1.12)
      Go version:       go1.13.15
      Git commit:       8728dd2
      Built:            Fri Apr  9 22:44:13 2021
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          1.4.4
      GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
     runc:
      Version:          1.0.0-rc93
      GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
     docker-init:
      Version:          0.19.0
      GitCommit:        de40ad0
    
  • Others: None

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 18

Most upvoted comments

@vasiliy-ul you are right. I fixed those permission problems and reran the make cluster-sync command. Now I can see all the KubeVirt APIs in the cluster. Thank you for your help!

$ ./cluster-up/kubectl.sh api-resources | grep kubevirt
cdiconfigs                                                                 cdi.kubevirt.io                false        CDIConfig
cdis                                cdi,cdis                               cdi.kubevirt.io                false        CDI
datavolumes                         dv,dvs                                 cdi.kubevirt.io                true         DataVolume
objecttransfers                     ot,ots                                 cdi.kubevirt.io                false        ObjectTransfer
storageprofiles                                                            cdi.kubevirt.io                false        StorageProfile
kubevirts                           kv,kvs                                 kubevirt.io                    true         KubeVirt
virtualmachineinstancemigrations    vmim,vmims                             kubevirt.io                    true         VirtualMachineInstanceMigration
virtualmachineinstancepresets       vmipreset,vmipresets                   kubevirt.io                    true         VirtualMachineInstancePreset
virtualmachineinstancereplicasets   vmirs,vmirss                           kubevirt.io                    true         VirtualMachineInstanceReplicaSet
virtualmachineinstances             vmi,vmis                               kubevirt.io                    true         VirtualMachineInstance
virtualmachines                     vm,vms                                 kubevirt.io                    true         VirtualMachine
virtualmachinerestores              vmrestore,vmrestores                   snapshot.kubevirt.io           true         VirtualMachineRestore
virtualmachinesnapshotcontents      vmsnapshotcontent,vmsnapshotcontents   snapshot.kubevirt.io           true         VirtualMachineSnapshotContent
virtualmachinesnapshots             vmsnapshot,vmsnapshots                 snapshot.kubevirt.io           true         VirtualMachineSnapshot
uploadtokenrequests                 utr,utrs                               upload.cdi.kubevirt.io         true         UploadTokenRequest