crc: [BUG] Certificates do not renew successfully using crc v1.10
General information
- OS: Linux (Fedora 31)
- Hypervisor: KVM
- Did you run `crc setup` before starting it (Yes/No)? Yes (always)
- Running CRC on: Desktop, 8-core, 32 GB memory, nothing else running (accessing remotely)
CRC version
❯ ./crc version
crc version: 1.10.0+9025021
OpenShift version: 4.4.3 (embedded in binary)
Host Operating System
❯ cat /etc/os-release
NAME=Fedora
VERSION="31 (Server Edition)"
ID=fedora
VERSION_ID=31
VERSION_CODENAME=""
PLATFORM_ID="platform:f31"
PRETTY_NAME="Fedora 31 (Server Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:31"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f31/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=31
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=31
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Server Edition"
VARIANT_ID=server
Steps to reproduce
- blocked UDP port 123 out of my local gateway
- sudo date -s "9 JULY 2020 11:11:11"
- downloaded/extracted crc-1.10
- crc delete (just in case I had an old one)
- crc setup
- crc start -c 8 -m 25000 …
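For reference, the steps above roughly correspond to the following on the host. This is a sketch only: the original report blocked UDP 123 at the local gateway rather than on the host, and the exact `crc start` flags were truncated in the report.

```shell
# Sketch only - requires root; the firewall rule is illustrative
# (the reporter blocked NTP on the gateway, not the host).

# Block outbound NTP so clocks cannot re-sync:
iptables -A OUTPUT -p udp --dport 123 -j DROP

# Jump the host clock past the certificate expiry window:
date -s "9 JULY 2020 11:11:11"

# Fresh CRC deployment:
crc delete
crc setup
crc start -c 8 -m 25000
```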
Expected
Certificates get renewed and cluster comes up.
Actual
Certificates do not get renewed.
Logs
https://gist.github.com/dea1a60dddaf7becd05200d0a2eed3e5
https://gist.github.com/btannous/1ebb796dbb747a2f18294403aea6c15d
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 17 (4 by maintainers)
The certificates expired this morning and it looks like this has had an impact. Any suggestions?
@btannous Thanks! I applied the changes this morning and will test properly when the certs expire at the end of the month.
Turns out this can be done directly for the crc VM with libvirt on Linux.

Then, in the crc XML definitions, a `<filterref>` element needs to be added. With that in place, we can run the cluster "in the future" by changing this in the domain XML:
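The code blocks from this comment were lost in the archive. A sketch of what the setup could look like (filter name and values here are illustrative, not taken from the original comment): define an nwfilter that drops outbound NTP, reference it from the crc domain's network interface, and shift the guest clock via the `adjustment` attribute of the `<clock>` element:

```xml
<!-- Sketch only; names and values are illustrative. -->

<!-- 1. nwfilter definition (load with: virsh nwfilter-define no-ntp.xml) -->
<filter name='no-ntp' chain='root'>
  <rule action='drop' direction='out' priority='500'>
    <udp dstportstart='123'/>
  </rule>
</filter>

<!-- 2. In the crc domain XML (virsh edit crc), attach the filter
     to the VM's network interface -->
<interface type='network'>
  <!-- existing source/model/mac elements elided -->
  <filterref filter='no-ntp'/>
</interface>

<!-- 3. Shift the guest clock forward, e.g. 30 days (2592000 seconds) -->
<clock offset='variable' adjustment='2592000' basis='utc'/>
```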
`adjustment` being a value in seconds. I need to experiment a bit more with this: to exercise the cert recovery code, we probably need to change the time on the host too, and with NTP blocked, the cluster will probably sync with the host time without needing any changes to that `<clock>` element.

We are currently investigating this …
I can confirm that this issue does not exist in 1.9.0, because 1.9.0 deploys fine from scratch in my environment. When performing the same steps with version 1.10.0, `crc start` fails consistently.
Logs https://github.com/code-ready/crc/files/4717841/logs.txt
@btannous Have the same issue and have been fighting it all day … A bit of debugging shows …
`/etc/kubernetes/static-pod-resources/recovery-kube-apiserver-pod` is created during the start of the recovery pod here. Therefore, once the recovery pod is destroyed, the config directory is removed as well.
It is possible to manually re-run the command that created the pod, but the new certificates generated in the `admin.kubeconfig` are not signed by the correct CA, so running any `oc` command afterwards just errors out with `unknown certificate-authority` errors.
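The CA mismatch described above can be reproduced in isolation with `openssl`. This is purely illustrative (all file names are throwaway examples, not real CRC paths): a certificate signed by one CA fails verification against a different CA, which is the symptom `oc` hits with the regenerated credentials.

```shell
# Illustrative only: demonstrates the "unknown certificate-authority" symptom.

# Two independent self-signed CAs, standing in for the original cluster CA
# and the CA that signed the regenerated certificates.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca1.key -out ca1.crt -subj "/CN=ca1" -days 1
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca2.key -out ca2.crt -subj "/CN=ca2" -days 1

# A client certificate signed by ca1 (analogous to the admin.kubeconfig cert).
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr -subj "/CN=leaf"
openssl x509 -req -in leaf.csr -CA ca1.crt -CAkey ca1.key -CAcreateserial -out leaf.crt -days 1

# Verification succeeds against the CA that actually signed it ...
openssl verify -CAfile ca1.crt leaf.crt   # prints: leaf.crt: OK
# ... and fails against the other CA, which is what oc runs into.
openssl verify -CAfile ca2.crt leaf.crt || true
```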