minikube: Localkube crashing: "Connection reset by peer"
This is a BUG REPORT
Minikube version: v0.17.1

Environment:
- OS : MacOS 10.12.3
- VM Driver: virtualbox
- ISO version: minikube-v1.0.7.iso
What happened: When running minikube, my node.js application is failing fairly regularly (~ every 15-30min), printing the error:
error: read tcp 192.168.99.1:50064->192.168.99.100:8443: read: connection reset by peer
When I then run, for example, kubectl get pods, I get the message
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
minikube status prints:
minikubeVM: Running
localkube: Stopped
In order to get things back up and running, I need to run minikube start (which for some reason takes several minutes). Even then, networking and name resolution between services is broken (e.g., nginx can't discover the node.js app), and the only practical resolution is to restart all of my Kubernetes services.
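Until the root cause is fixed, the recovery steps above can be sketched as a small watchdog. This is a sketch only: the `localkube_stopped` helper is hypothetical, and it assumes the exact `minikube status` output format quoted in this report.

```shell
# Sketch of a recovery check, assuming the `minikube status` output
# format shown above. `localkube_stopped` is a hypothetical helper.

# Succeeds (exit 0) when the status text reports localkube as Stopped.
localkube_stopped() {
  printf '%s\n' "$1" | grep -q '^localkube: Stopped'
}

# Example: the status output quoted in this report.
status='minikubeVM: Running
localkube: Stopped'

if localkube_stopped "$status"; then
  echo "localkube is down"
  # On a real workstation you would then run:
  #   minikube start
  #   kubectl delete pods --all   # force pods to be recreated so service
  #                               # discovery recovers (deployments respawn them)
fi
```

`kubectl delete pods --all` is just one blunt way to restart everything; deleting the pods lets their deployments/replica sets recreate them with working networking.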
What you expected to happen: Minikube and localkube should persist until they are explicitly stopped.
How to reproduce it (as minimally and precisely as possible): This is the hardest part — sometimes I get crashes every 5 minutes, sometimes it goes for hours without any problem, and crashes seem to be independent of my development behavior. This is affecting all four developers on our team, who all have fairly similar setups. I’ve tried downgrading all the way to v0.13 with no luck.
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 43 (23 by maintainers)
Commits related to this issue
- Add warning when no groupMeta exists for verison Reference: https://github.com/kubernetes/kubernetes/pull/44771 Fixes https://github.com/kubernetes/minikube/issues/1252 TPRs are incorrectly coupled... — committed to r2d4/minikube by r2d4 7 years ago
- Add warning when no groupMeta exists for verison Reference: https://github.com/kubernetes/kubernetes/pull/44771 Fixes https://github.com/kubernetes/minikube/issues/1252 TPRs are incorrectly coupled... — committed to dalehamel/minikube by r2d4 7 years ago
I don’t think this is related. I believe my issue is caused when dynamic memory is turned on (Hyper-V). If I turn off dynamic memory then I don’t seem to have a problem.
I noticed the following in the event viewer:
'minikube' has encountered a fatal error. The guest operating system reported that it failed with the following error codes: ErrorCode0: 0x7F2454576109, ErrorCode1: 0x40000000, ErrorCode2: 0x1686F10, ErrorCode3: 0x7F245366F5C0, ErrorCode4: 0x7F24547B9548. If the problem persists, contact Product Support for the guest operating system. (Virtual machine ID D10E910C-6528-42CC-AA19-7378D1071A91)

The VM automatically restarts, after which localkube is not running. This happens at around the 1'45" mark after minikube start (I realise this could differ, as it depends on the hardware the VM is running on). Memory allocation climbs from 2048 MB to 3840 MB, holds there for about 10 seconds, and then the VM restarts.

Disclaimer: I don't yet use minikube / k8s in anger, as I'm still learning how to use it.
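For anyone else hitting this on Hyper-V, one way to turn off dynamic memory for an existing VM is the standard `Set-VMMemory` cmdlet from an elevated PowerShell prompt on the Windows host. This assumes the default VM name `minikube` (adjust if yours differs), and the VM must be stopped first, since memory settings can't be changed while it's running:

```powershell
# Assumes the Hyper-V PowerShell module and a VM named "minikube".
Stop-VM -Name minikube
Set-VMMemory -VMName minikube -DynamicMemoryEnabled $false -StartupBytes 2GB
Start-VM -Name minikube
```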
Sorry, there was a slight copy paste error with my patch, the fix will be in the next release
https://github.com/kubernetes/minikube/pull/1497
Thank you @DenisBiondic. Disabling dynamic memory on Hyper-V seems to have fixed the problem, as I can now use the ingress addon. I did have one case where localkube had stopped, but that was after I had shut down my laptop and turned it back on while plugged into a docking station with Ethernet; the Primary Virtual Switch was set up to point to the WiFi adapter.
I was experiencing this too, intermittently, with minikube v0.17.1 and kube 1.5.3. Bumped up to minikube v0.18 with kube 1.6, and it seems to be resolved (at least, I haven’t seen this happen since the upgrade).
Note that the panicking code path (kube-controller-manager --> StartControllers() --> APIRegistrationManager.RESTMapper()) doesn't exist in the vendored k8s packages in v0.18, so it's a good bet that it's indeed resolved.